Why Secrets in JavaScript Bundles Are Still Being Missed
Leaked API keys are now a routine incident type for SOC teams, and so are the downstream breaches they enable. The obvious question is why these sensitive tokens are still so frequently exposed in the first place.
To understand the root cause, Intruder’s research team evaluated what conventional vulnerability and infrastructure scanners actually inspect, then designed a new secrets detection technique to close the gaps we see in current tooling.
Running this approach at scale across 5 million applications surfaced over 42,000 exposed tokens spanning 334 secret types, highlighting a substantial class of leaked secrets that today’s scanners routinely miss, especially in single-page applications (SPAs) and their JavaScript bundles.
In this article, we walk through existing secrets detection techniques, their blind spots from a SOC perspective, and the results of scanning millions of public applications for secrets hidden inside JavaScript bundles.
Established secrets detection methods (and their limitations)
Traditional secrets detection
The classic, fully automated way to detect application secrets is to probe a fixed set of common paths and run regular expressions over responses to match known secret formats.
This can be useful and will catch some straightforward leaks, but it has obvious limitations and will miss many classes of exposure, especially those that depend on spidering the application or authenticating as a user.
A good example of this is Nuclei’s GitLab personal access token template. The scanner is provided a base URL, for instance, https://portal.intruder.io/, and the template then instructs it to:
- Issue an HTTP GET request to https://portal.intruder.io/
- Inspect only the direct response to that single request, ignoring any additional pages or resources such as JavaScript files
- Search that response for a GitLab personal access token pattern
- If a candidate is found, send a follow-up request to GitLab’s public API to validate whether the token is active
- If the token is live, flag an issue
While this is a relatively simple workflow, it is still effective, particularly when templates enumerate many high-likelihood paths where secrets are commonly exposed.
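The workflow above can be sketched in a few lines of Python. The regex reflects GitLab's documented `glpat-` personal access token format, and validation uses GitLab's public `/api/v4/user` endpoint; the helper names are ours, not part of any scanner.

```python
import re
import urllib.request
import urllib.error

# GitLab personal access tokens use the documented "glpat-" prefix format.
GITLAB_PAT_RE = re.compile(r"glpat-[0-9A-Za-z_\-]{20}")

def extract_gitlab_tokens(body: str) -> list[str]:
    """Return candidate GitLab PATs found in a single HTTP response body."""
    return GITLAB_PAT_RE.findall(body)

def is_token_active(token: str) -> bool:
    """Validate a candidate token against GitLab's public API.

    A 200 from /api/v4/user means the token is live; a 401 is raised as
    an HTTPError, which we treat as revoked or invalid.
    """
    req = urllib.request.Request(
        "https://gitlab.com/api/v4/user",
        headers={"PRIVATE-TOKEN": token},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

A scanner following this pattern would simply run `extract_gitlab_tokens()` over the one response it fetched, then call `is_token_active()` on each candidate before raising an issue.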
This pattern is typical of infrastructure-oriented scanners, which generally do not run a headless browser. Given only a base URL to assess (for example, https://portal.intruder.io), this older style of scanning does not follow the additional requests a browser would make to render the page, such as retrieving JavaScript assets like https://portal.intruder.io/assets/index-DzChsIZu.js.
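To illustrate how small that missing step is: the JavaScript assets a browser would fetch are listed right in the page's HTML, so enumerating them takes only a simple parse. This is a minimal stdlib sketch, not specific to any particular scanner.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class ScriptSrcCollector(HTMLParser):
    """Collect the src of every <script> tag, i.e. the extra requests
    a browser would make that a single-request scanner skips."""

    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.assets: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                # Resolve relative paths against the page URL.
                self.assets.append(urljoin(self.base_url, src))

def script_assets(html: str, base_url: str) -> list[str]:
    """Return the absolute URLs of all JavaScript assets a page references."""
    collector = ScriptSrcCollector(base_url)
    collector.feed(html)
    return collector.assets
```

Feeding in a typical SPA index page yields the bundle URLs that a browser-less scanner never requests, and therefore never scans for secrets.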
Dynamic Application Security Testing (DAST)
Dynamic Application Security Testing (DAST) tools generally provide a more capable way to scan applications. They usually support full spidering of the site, authentication workflows, and deeper analysis of application-layer weaknesses. On paper, DAST scanners look like a natural fit for secrets discovery in front-end code. In principle, nothing stops a DAST scanner from enumerating JavaScript assets and scanning them for secrets.
In practice, this kind of scanning is resource-intensive, requires careful tuning and maintenance, and is often reserved for a small set of business-critical applications. As a result, you are unlikely to see every internet-facing app in a large estate onboarded into DAST. On top of that, many DAST tools ship with a limited library of regular expressions compared with popular command-line secrets scanners.
That leaves a clear coverage gap. Traditional infrastructure scanners should arguably fill it but currently do not, and in practice DAST does not reliably cover it either, given its deployment scope, cost, and operational overhead.
Static Application Security Testing (SAST)
Static Application Security Testing (SAST) tools scan source code for vulnerabilities and are a primary control for catching secrets before they ever make it into a build. They are effective at detecting hardcoded credentials and can stop entire categories of leaks from reaching production.
However, our findings show that SAST also does not give full coverage. Some secrets that ended up inside JavaScript bundles were completely outside what static analysis would typically catch, and they slid past pre-commit and CI checks into deployed artifacts.
Building a secrets detection check for JavaScript bundles
When this research started, it was not obvious how frequent this issue actually was. Are secrets routinely ending up in JavaScript front-ends, and is the problem widespread enough that SOC and appsec teams need automated detection at scale?
To answer this, we built an automated check and scanned roughly 5 million applications. The result set was much larger than expected: the raw output was over 100MB of plain text and contained more than 42,000 tokens across 334 distinct secret types.
We did not fully triage every finding as a SOC investigation, but in the subset we manually reviewed, we saw multiple high-impact exposures that would warrant immediate incident response and credential rotation.
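The core of such a check is straightforward: fetch each JavaScript bundle an application references and run a library of secret-format regexes over it. The sketch below shows only the matching stage, with two illustrative patterns; a production check would carry hundreds of patterns (ours covered 334 secret types) plus a per-type validation step.

```python
import re

# Two illustrative patterns; a real scanner ships hundreds of these.
SECRET_PATTERNS = {
    "gitlab_pat": re.compile(r"glpat-[0-9A-Za-z_\-]{20}"),
    "slack_webhook": re.compile(
        r"https://hooks\.slack\.com/services/T[0-9A-Z]+/B[0-9A-Z]+/[0-9A-Za-z]+"
    ),
}

def scan_bundle(js_source: str) -> dict[str, list[str]]:
    """Run every secret pattern over one JavaScript bundle and
    return the candidate matches grouped by secret type."""
    findings: dict[str, list[str]] = {}
    for secret_type, pattern in SECRET_PATTERNS.items():
        matches = pattern.findall(js_source)
        if matches:
            findings[secret_type] = matches
    return findings
```

Running this over every bundle enumerated from an application's index page, across millions of applications, produces the kind of result set described above.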
What we found
Code Repository Tokens
The highest-risk leaks we observed were tokens for source code hosting platforms such as GitHub and GitLab. Across the scan set, we identified 688 such tokens, many of which were still valid and provided full repository access.
In one representative case, a GitLab personal access token was hardcoded directly into a JavaScript file. The token's scope granted access to all private repositories for that organization, including CI/CD pipeline secrets and credentials for downstream services such as AWS and SSH, effectively providing a pivot point to a much larger compromise.
Project Management API Keys
Another impactful pattern involved API keys for project management platforms, such as Linear, embedded directly in front-end bundles.
One such token granted access to the organization's entire Linear workspace, including internal tickets, project details, and references to downstream services and SaaS environments, data that would be extremely useful for intrusion planning, phishing, and lateral movement.
And more
We also found exposed secrets tied to a broad set of other services, including:
- CAD software APIs – access to user information, project metadata, and architectural designs, including sensitive facilities such as a hospital
- Link shorteners – permissions to create and enumerate short links, useful for tracking and phishing infrastructure
- Email platforms – access to mailing lists, campaign content, and subscriber details
- Webhooks for chat and automation platforms – 213 Slack, 2 Microsoft Teams, 1 Discord, and 98 Zapier webhooks, all verified as active at the time of discovery
- PDF converters – access to third-party document generation services that may process internal or customer data
- Sales intelligence and analytics platforms – access to aggregated company and contact data that can support targeted social engineering
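Verifying whether a leaked chat webhook is still live usually does not require posting a visible message. For Slack incoming webhooks, for example, the error body distinguishes a live endpoint from a dead one: a deliberately malformed POST to a valid webhook is commonly observed to return invalid_payload, while a revoked or non-existent one returns errors such as no_service. This behavior is an observation about Slack's current responses and may change; the classifier below only interprets a response it is given.

```python
import urllib.request
import urllib.error

def classify_slack_webhook(status: int, body: str) -> str:
    """Interpret Slack's response to a deliberately malformed POST.

    A live webhook rejects the payload (proving it exists); a revoked
    one reports that no integration sits behind the URL.
    """
    if status == 200 or "invalid_payload" in body or "missing_text" in body:
        return "active"
    if "no_service" in body or "no_team" in body or status == 404:
        return "inactive"
    return "unknown"

def probe_slack_webhook(url: str) -> str:
    """POST an empty body (never a real message) and classify the result."""
    req = urllib.request.Request(url, data=b"", method="POST")
    try:
        with urllib.request.urlopen(req) as resp:
            return classify_slack_webhook(resp.status, resp.read().decode())
    except urllib.error.HTTPError as err:
        return classify_slack_webhook(err.code, err.read().decode())
```

Probing this way lets a scanner confirm a finding is live and worth an incident ticket without ever dropping a message into the victim's channel.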
Don’t ship your secrets
Shift-left controls are still essential. SAST, repository-level secret scanning, and IDE plugins do remove real risk and prevent many secrets from ever reaching production artifacts.
But as this research shows, they do not cover every route a secret can take into a deployed SPA. Secrets introduced or injected during build, deployment, or runtime configuration can bypass those early checks and end up shipped in front-end bundles, long after shift-left tooling has finished its job.
From a SOC and appsec operations viewpoint, this problem will likely increase as pipelines become more automated and more code is generated or transformed automatically.
To close this gap, single-page application spidering and JavaScript bundle inspection need to be part of your external attack surface monitoring. Detecting secrets in live front-ends before attackers do allows security teams to rotate credentials, update access scopes, and tune pipelines to prevent repeat exposure. We've built automated SPA-focused secrets detection into Intruder to help teams operationalize this control.
This article is a contributed piece from one of our valued partners.


