Informational and Low-Risk Web Findings at Scale: Headers, Cookies, and 'Quick Wins' Done Rigorously
By Shamita Senthil Kumar, Associate Consultant, Artais
Analyzing web applications with non-intrusive techniques that avoid exploitation or state-changing actions can still yield meaningful security findings. Compiling available information about a target can surface eyebrow-raising concerns such as issues with cookies, response headers, or security policies. However, most scanning tools report these as low severity, and they are often disregarded. This is not because the issues themselves are unimportant but because they are reported without the context or actionable analysis needed.
Cutting through the noise of such reports comes down to aligning the findings with real-world risk rather than taking the raw output at face value. By analyzing and interpreting the results, real vulnerabilities can surface, improving the overall security posture. This article focuses on the steps needed to prioritize and contextualize non-intrusive findings so they lead to insightful responses.
Defining A Taxonomy
What is classified as a finding...
Results from non-intrusive testing are not simply a checklist of missing headers or deviations from typical “best practice” behavior. Chasing every configuration change or missing header can divert attention from real issues that need solutions. While findings range in impact and severity, what they have in common is that they pose a security risk: either exploitable in isolation or increasing impact when chained with other weaknesses.
Some examples include the following (a minimal detection sketch follows the list):
An authentication cookie without the HttpOnly attribute.
Login endpoints that do not enforce HTTP Strict Transport Security (HSTS). HTTPS should be enforced early, and login is not the only early entry point.
All subdomains trusted in cross-origin resource sharing. This is often implemented as “reflect Origin if it ends with .example.com”. It’s most dangerous when paired with Access-Control-Allow-Credentials: true and a broad/reflective Access-Control-Allow-Origin, because an attacker-controlled subdomain can issue credentialed cross-origin requests and read sensitive responses. It can become high risk when a subdomain is vulnerable to subdomain takeover or an untrusted subdomain can host attacker-controlled JavaScript. Sensitive information such as credentials can be exposed in this situation.
Sensitive information submitted using GET, such as passwords or user-identifying data.
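As a rough illustration of how such checks stay non-intrusive, the following sketch (assuming Python 3 with the requests library; the target URL and test origin are hypothetical) issues a single GET and only inspects response headers.

# Minimal non-intrusive sketch: one GET, then read headers only.
import requests

TARGET = "https://app.example.com/login"        # hypothetical target
TEST_ORIGIN = "https://evil.example.com"        # hypothetical untrusted subdomain

resp = requests.get(TARGET, headers={"Origin": TEST_ORIGIN}, timeout=10)

# Cookie attribute review: Secure / HttpOnly on each Set-Cookie header.
for set_cookie in resp.raw.headers.getlist("Set-Cookie"):
    attrs = [p.strip().lower() for p in set_cookie.split(";")[1:]]
    name = set_cookie.split(";")[0].split("=")[0]
    print(name,
          "| Secure" if "secure" in attrs else "| missing Secure",
          "| HttpOnly" if "httponly" in attrs else "| missing HttpOnly")

# HSTS: is Strict-Transport-Security sent on this early entry point?
print("HSTS:", resp.headers.get("Strict-Transport-Security", "not set"))

# CORS: is an arbitrary Origin reflected, and are credentials allowed?
print("Access-Control-Allow-Origin:", resp.headers.get("Access-Control-Allow-Origin"))
print("Access-Control-Allow-Credentials:", resp.headers.get("Access-Control-Allow-Credentials"))

Anything the sketch flags still needs the contextual judgment described below before it becomes a reported finding.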
Severity scores should be assigned based on context. A scanner produces a report, but the results need to be analyzed in terms of the system architecture, security boundaries, and environment scope. For example, a broad cookie domain may be acceptable in a tightly controlled environment but detrimental across a larger, unmanaged organizational scope; the effect is context dependent.
...versus classified as informational?
Overall, a good way to classify a result is by asking the following:
'Does this finding provide benefit to an outside attacker?'
→ If yes, then it is a finding that needs to be remediated.
→ If no, then it is informational but could still provide insight into changes that would generally improve the security posture.
For example, a website may be missing optional or deprecated headers. Since the site functions without them and no exploitable vulnerability results, this is an informational discovery. Certain scanning tools like Burp Suite or ZAP even report the presence of secure configurations, e.g., a website having a proper TLS certificate. Such results pose no direct risk to users or the organization and no direct benefit to an attacker; however, there is still something to learn from them.
Informational Finding Example - Missing CSP
Consider a web application that does not enforce a Content Security Policy (CSP). Instead of sending an enforced Content-Security-Policy header, it only sends Content-Security-Policy-Report-Only, which collects violation reports but does not block content. In other words, CSP is present as a monitoring control, but it isn’t providing runtime protection yet.
Without enforced CSP, the impact of an XSS bug is typically less constrained, because the browser won’t restrict script sources or block inline/dynamic execution based on policy. CSP is primarily a mitigation; the risk severity depends on whether there’s a realistic injection path and how strong other controls are.
In this case, the lack of CSP enforcement is informational and does not constitute a vulnerability on its own unless there is an actual injection vector or other XSS-related flaw.
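A minimal sketch of how this distinction can be checked, assuming Python 3 with the requests library and a hypothetical URL:

# Distinguish an enforced CSP from a report-only policy.
import requests

resp = requests.get("https://app.example.com/", timeout=10)

enforced = resp.headers.get("Content-Security-Policy")
report_only = resp.headers.get("Content-Security-Policy-Report-Only")

if enforced:
    print("Enforced CSP:", enforced)
elif report_only:
    print("Report-only CSP (monitoring only, no runtime protection):", report_only)
else:
    print("No CSP header at all")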
Contextualization of Results
Issues can be grouped by type of security flaw; most results will fall into one of the following:
Cookies & Sessions
Content Security Policy (CSP)
Security Headers
Transport Security
These account for the vast majority of non-intrusive testing results. While many findings initially seem repetitive, they often require contextualization to determine whether they represent a vulnerability, an opportunity for security hardening, or an acceptable risk with other compensating controls. Below are example cases from each category to illustrate the analysis.
Cookies
Flagging SameSite=None setting 1
The first step is to check whether the Secure attribute is included. This ensures that the cookie is only sent over HTTPS.
Modern browsers generally require this and may reject the cookie otherwise. However, there are still cases where the attribute is missing due to misconfiguration and the cookie does not behave as intended, so it is best to check what the requirements were.
Even with Secure, the risk depends on the type of cookie and whether the application is exposed to Cross-Site Request Forgery (CSRF) attack paths. Context about the application's business functions helps judge the attack angle and severity and determine whether this is a vulnerability.
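A minimal sketch of this check, assuming Python 3 with the requests library and a hypothetical URL; it only flags the attribute combination and leaves the CSRF and cookie-type judgment to the analyst:

# Flag cookies that set SameSite=None without the Secure attribute.
import requests

resp = requests.get("https://app.example.com/", timeout=10)

for set_cookie in resp.raw.headers.getlist("Set-Cookie"):
    parts = [p.strip().lower() for p in set_cookie.split(";")]
    if "samesite=none" in parts and "secure" not in parts:
        print("SameSite=None without Secure:", set_cookie.split(";")[0])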
Cookies missing the HttpOnly attribute 2
This attribute prevents client-side JavaScript from reading the cookie, which mitigates cookie theft via XSS. Without it, JavaScript can access the cookie through document.cookie.
While setting this flag is best practice for security, its absence does not always indicate an issue. Some applications intentionally allow JavaScript access for specific functionality.
Application purpose and requirements should be checked to justify the choice.
Broad cookie domain scopes
If unknown or untrusted subdomains are included, this is a finding.
If there is known cross-subdomain authorization included, this is informational.
The key is to classify the security boundaries as trusted or untrusted before declaring a finding.
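A minimal sketch of that classification step, assuming Python 3 with the requests library; the URL and the trusted-scope list are hypothetical and would come from the organization's own inventory:

# Compare each cookie's Domain scope against a trusted-scope inventory.
import requests

TRUSTED_SCOPES = {"app.example.com", ".app.example.com"}  # hypothetical inventory

resp = requests.get("https://app.example.com/", timeout=10)

for cookie in resp.cookies:
    scope = cookie.domain or "host-only"
    verdict = "informational" if scope in TRUSTED_SCOPES else "review: broad or untrusted scope"
    print(cookie.name, "->", scope, "->", verdict)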
CSP
unsafe-inline
Content-Security-Policy: script-src 'self' 'unsafe-inline'
Treat as a risk indicator: allowing inline script weakens CSP’s ability to mitigate XSS because many injection primitives can execute without needing an external script source.
Not automatically a “vulnerability,” but it weakens a blast-radius control; severity should increase if the app has any realistic injection surface (templating, user content, legacy DOM sinks).
What to verify:
Is inline script actually required (legacy frameworks, inline event handlers)?
Are there exceptions via nonce-... / hashes that could replace unsafe-inline?
Typical remediation:
Remove unsafe-inline and move to nonce-based or hash-based CSP for scripts; prefer avoiding inline event handlers.
Consider strict-dynamic (where appropriate) to tighten dynamic script loading.
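As a sketch of what nonce-based CSP can look like in practice, the example below assumes a small Flask application; the route, policy, and template are illustrative rather than a drop-in configuration:

# Nonce-based CSP sketch: a fresh nonce per response, shared between the
# policy header and the rendered markup.
import secrets
from flask import Flask, g, render_template_string

app = Flask(__name__)

@app.before_request
def make_nonce():
    # Fresh, unpredictable nonce for every request.
    g.csp_nonce = secrets.token_urlsafe(16)

@app.after_request
def add_csp(resp):
    resp.headers["Content-Security-Policy"] = (
        f"script-src 'self' 'nonce-{g.csp_nonce}'; object-src 'none'"
    )
    return resp

@app.route("/")
def index():
    # Only scripts carrying the matching nonce are allowed to execute.
    return render_template_string(
        "<script nonce='{{ nonce }}'>console.log('allowed');</script>",
        nonce=g.csp_nonce,
    )

The same idea applies to any framework that lets a per-response value be shared between the policy header and the rendered page.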
wildcard domains
Content-Security-Policy: script-src https://*.example.com
This expands trust to every subdomain, which often exceeds what’s intended (especially when subdomains are operated by different teams, hosted on third-party platforms, or prone to takeover).
Treat as a finding when:
Subdomains are not uniformly controlled/hardened, or you can’t confidently assert “all subdomains are trusted,” or
The app loads scripts from subdomains that could host attacker-controlled content (e.g., user-generated content, marketing tools, legacy apps).
Often informational / acceptable when:
The org has strict ownership controls, no third-party hosted subdomains, strong provisioning/deprovisioning, and you can enumerate/justify the exact subdomains in use.
Typical remediation:
Replace wildcards with explicit allowlists of known script hosts.
Where feasible, combine with SRI (for static third-party scripts) and tighten related directives (default-src, object-src 'none', etc.).
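For the SRI portion, here is a minimal sketch (Python 3 standard library only; the file path and CDN URL are hypothetical) of generating an integrity value for a pinned third-party script:

# Compute a Subresource Integrity value for a pinned copy of a third-party script.
import base64
import hashlib

with open("vendor/analytics.js", "rb") as f:
    digest = hashlib.sha384(f.read()).digest()

integrity = "sha384-" + base64.b64encode(digest).decode()
print(f'<script src="https://cdn.example.com/analytics.js" integrity="{integrity}" crossorigin="anonymous"></script>')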
Security Headers
Overly permissive policy
Permissions-Policy: geolocation=*, microphone=*
A policy defined this way allows any origin, including embedded third-party content, to use these browser features, which weakens security by widening the exposed scope. The permitted features need to be validated against the actual application purpose.
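A tightened policy, sketched below under the assumption that the application only needs same-origin geolocation and no microphone access at all, would look like:

Permissions-Policy: geolocation=(self), microphone=()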
Missing Referrer-Policy header
Referrer-Policy controls how much of the current page’s URL the browser includes in the Referer header when navigating to another site or loading third-party resources.
If it’s missing, browsers may send a full URL (including path and query string) as the referrer in some cases, which can create privacy leakage and unintentionally expose sensitive URL data to third parties (analytics, CDNs, external links).
This becomes a finding if the application places sensitive values in URLs (e.g., session identifiers, reset tokens, email addresses, internal IDs) or embeds third-party content on sensitive pages.
Typical remediation: set a safe default such as Referrer-Policy: strict-origin-when-cross-origin (common) or no-referrer (stricter), then validate it doesn’t break required flows.
Transport Security
HSTS not enabled on endpoints
Strict-Transport-Security header is missing
This example was mentioned earlier. Because there are other early entry points besides login, it is important to enforce this header early. Doing so reduces exposure to HTTPS-downgrade risks and helps prevent browsers from making accidental HTTP requests to the domain after the first secure visit.
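As a hedged example, a common policy resembles the header below; the max-age value should match the organization's rollback comfort, and includeSubDomains (or preload) should only be added once every subdomain is confirmed to serve HTTPS:

Strict-Transport-Security: max-age=31536000; includeSubDomains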
Weak SSL/TLS configuration
This could mean support for outdated protocol versions, weak cipher suites, or other insecure encryption settings. It leaves traffic vulnerable to man-in-the-middle attacks or decryption.
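A minimal sketch (Python 3 standard library only; the hostname is hypothetical) of recording what a modern client negotiates with the server. Note that this captures only the negotiated protocol and cipher, not the full range the server would accept; enumerating that fully needs a dedicated TLS scanner:

# Record the negotiated TLS protocol version and cipher suite for one host.
import socket
import ssl

HOST, PORT = "app.example.com", 443

ctx = ssl.create_default_context()
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Negotiated protocol:", tls.version())  # e.g. 'TLSv1.3'
        print("Cipher suite:", tls.cipher())          # (name, protocol, secret bits)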
False Positives
Some results may need manual verification in order to provide possible solutions. Scanners can make mistakes and flag problems that are not there or have no actions to take. This is one reason why reproducible proof is important.
Examples can include:
Weak TLS reported on a port that is not open. There is no reachable service behind the concern because the port is closed and nothing is listening. However, confirming the result still provides assurance and double-checks the issue.
A cookie flagged as missing attributes that are actually present. The scanner did not parse the cookie definition correctly.
A security header flagged as missing that is actually present. The scanner did not parse the response headers correctly.
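A minimal sketch of that manual verification step, assuming Python 3 with the requests library and a hypothetical URL; printing the raw headers exactly as the server sent them confirms or refutes the scanner's claim and doubles as redactable evidence:

# Print the raw response headers to verify a scanner's claim by hand.
import requests

resp = requests.get("https://app.example.com/", timeout=10)

print("Status:", resp.status_code)
for name, value in resp.raw.headers.items():
    print(f"{name}: {value}")  # redact sensitive values before attaching as evidence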
Guidance On Remediation
When addressing testing results, blanket changes across the entire project can be counterproductive. Specific remediation that accounts for the role and context of each component leads to better outcomes. Effective remediation relies on understanding all aspects of an issue: where it is, how the component is used, and what the scope of impact could be. Issues are fixed faster with guidance and analysis grounded in the real-world application.
Good remediation guidance includes only the necessary changes and gives explicit instructions on what to do, while minding the context of the system and its architecture.
Reproducible Proof
Non-intrusive findings still need evidence to support the result. A finding should show the actual pinpointed issue, with any sensitive values redacted, and explain what security vulnerability is present. This lets others reproduce the issue and understand its impact, allowing for a solution to be put in place faster.
If reproducible proof for a finding is difficult or unlikely to produce, the result is more often a false positive. The answer is not to simply disregard low-severity results but to apply context as described in the examples above. When results from non-intrusive testing are reviewed, analyzed, and reproduced, noise is eliminated and risk begins to come down.
Final Thoughts
Having a defined taxonomy, evaluating the context of findings, and providing reproducible evidence can help turn noisy non-intrusive web findings into trusted outcomes. These results can tighten an organization's security posture through an accumulation of small changes and mindfulness of potential attack methods.
Footnotes
https://portswigger.net/web-security/csrf/bypassing-samesite-restrictions
https://portswigger.net/kb/issues/00500600_cookie-without-httponly-flag-set