Disclaimer: Commissioned, Curated and Published by Russ. Researched and written with AI.
What’s New This Week
The Hacker News reported on March 26 that a researcher identified as Yomtov disclosed a DOM-based XSS vulnerability in the Arkose Labs CAPTCHA component hosted on a-cdn.claude.ai. The flaw enables any website – including one served via a legitimate Google ad – to inject prompts into the Claude browser extension without user interaction. A week earlier, Oasis Security published their “Cloudy Day” research (covered by TechRadar) documenting three separate vulnerabilities in Claude.ai: invisible URL-parameter prompt injection, data exfiltration via Anthropic’s own Files API, and an unvalidated open redirect on claude.com. Anthropic patched the URL injection flaw; fixes for the other two were in progress as of the TechRadar report. Together these findings illustrate a consistent pattern: AI extension infrastructure becomes attack surface when third-party components or platform-level features are exploitable from the outside.
Changelog
| Date | Summary |
|---|---|
| 26 Mar 2026 | Initial publication covering the Arkose Labs XSS attack chain and the Oasis Security “Cloudy Day” findings. |
Here is the attack. An attacker runs a Google ad. The ad points to a page they control. That page loads the Arkose Labs CAPTCHA component from a-cdn.claude.ai inside a hidden iframe. The attacker’s page sends a postMessage payload that triggers a DOM-based XSS in the Arkose component. The XSS executes arbitrary JavaScript in the context of a-cdn.claude.ai. Because the Claude browser extension treats that subdomain as trusted, the injected script can fire prompts directly to the extension. Those prompts instruct it to exfiltrate data. The victim never clicks anything beyond the ad.
That is the chain reported by The Hacker News on March 26, 2026, attributed to researcher Yomtov.
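The vulnerable pattern at the heart of this chain can be sketched in a few lines. Everything below is illustrative: the interface, function name, and payload shape are assumptions for the sake of the sketch, not Arkose Labs’ actual code. The essence is a message handler that never checks where a message came from and interpolates the payload into markup destined for a DOM sink like innerHTML.

```typescript
// Hypothetical sketch of the vulnerable pattern (illustrative names,
// not Arkose Labs' actual code): a postMessage handler that never
// checks the sender's origin and interpolates attacker-controlled data
// into markup destined for element.innerHTML, a classic DOM-XSS sink.

interface WidgetMessage {
  origin: string;            // origin of the page that sent the message
  data: { label?: string };  // the embedding page fully controls this
}

// Returns the HTML string the widget would assign to innerHTML.
// No origin check, no escaping: any page embedding the iframe can
// inject markup, and therefore script, into the widget's context.
function renderWidget(msg: WidgetMessage): string {
  return `<button class="captcha-btn">${msg.data.label ?? "Verify"}</button>`;
}
```

Because the widget runs on a-cdn.claude.ai, anything injected this way executes with that subdomain’s authority, which is the property the rest of the chain exploits.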
Why the Subdomain Mattered
The critical lever is subdomain trust. a-cdn.claude.ai is a subdomain of claude.ai: a distinct origin under the browser’s same-origin policy, but part of the same registrable domain. Browser extensions that whitelist or implicitly trust claude.ai will typically extend that trust to all of its subdomains. The extension has no reason to expect hostile content from a-cdn.claude.ai – that domain is under Anthropic’s control.
Third-party components on trusted subdomains are exactly this kind of structural risk. The Arkose Labs CAPTCHA is a third-party widget, but from the browser extension’s perspective it runs on a first-party domain. Any XSS in that widget inherits the trust of the domain hosting it. That is the gap.
Same-origin policies and extension trust models are designed around domain ownership. They do not account for third-party code running within a trusted domain’s execution context. An extension developer who whitelists claude.ai implicitly trusts everything that can execute JavaScript under that origin – including CAPTCHA widgets from third-party vendors.
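The subdomain-trust behaviour falls directly out of how extension host permissions are written. A simplified sketch of the matching logic, assuming a wildcard pattern of the `*.claude.ai` form (this approximates match-pattern semantics; it is not Chrome’s actual implementation):

```typescript
// Simplified sketch of wildcard host matching as used in extension
// host permissions. A pattern like "*.claude.ai" matches the bare
// domain and every subdomain, which is exactly how trust granted to
// claude.ai silently extends to a-cdn.claude.ai.

function hostMatches(pattern: string, host: string): boolean {
  if (pattern.startsWith("*.")) {
    const base = pattern.slice(2);
    // Matches "claude.ai" itself and anything ending in ".claude.ai".
    return host === base || host.endsWith("." + base);
  }
  return host === pattern; // exact match otherwise
}
```

Note that the suffix check is anchored on the dot, so a lookalike host such as claude.ai.attacker.example does not match; the risk here is not lookalikes but legitimate subdomains hosting third-party code.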
What Zero-Click Actually Means Here
“Zero-click” is used loosely in security coverage. Here it is precise. The victim does not need to:
- Visit claude.ai
- Click a suspicious link
- Download anything
- Dismiss a dialogue
- Or even have Claude open at the time the ad is clicked
All that is required is the Claude browser extension installed and a visit to any page that embeds the vulnerable Arkose component in an iframe. The attack runs silently in the background.
For engineers deploying AI browser extensions in enterprise environments, this is the threat model that matters. You cannot train users to avoid it through phishing awareness. There is no suspicious link. A legitimate Google ad is the entry point. The user clicks it; the attack executes.
The Arkose Labs CAPTCHA Angle
CAPTCHA components are near-universal on production web properties. They are loaded from third-party CDNs, embedded on pages that carry significant trust. The Arkose Labs component was hosted on a-cdn.claude.ai – a subdomain of the domain the Claude extension was designed to operate on.
Anthropic presumably deployed Arkose for bot detection somewhere in their infrastructure. A reasonable product decision in isolation. But the deployment created a surface where a DOM-based XSS in the third-party component was exploitable as if it were a first-party vulnerability. The vendor’s security posture became Anthropic’s attack surface, and Anthropic’s attack surface became the extension user’s problem.
Any organisation running AI browser extensions should audit which third-party scripts run on domains their extensions trust. A CAPTCHA widget from a vendor with its own security posture can become your attack vector when it is hosted under your domain.
Broader Context: The Oasis “Cloudy Day” Chain
The Arkose XSS disclosure came a week after Oasis Security’s “Cloudy Day” research, published March 19 and covered by TechRadar. That chain is different in mechanism but identical in impact pattern.
The Oasis chain starts with invisible prompt injection via URL parameters. Claude.ai supports opening a new chat with a pre-filled prompt via claude.ai/new?q=. Invisible HTML elements embedded in that parameter pass instructions to Claude that the user cannot see. Claude then uses Anthropic’s own Files API to search prior conversations and exfiltrate data to an attacker-controlled Anthropic account.
The delivery mechanism is an open redirect on claude.com: any URL in the format claude.com/redirect/[url] redirects without validation. Google Ads validates destination URLs by hostname only. So an attacker can place an ad that appears to point to claude.com – technically true – which redirects to an attacker page, which feeds the victim a claude.ai URL with invisible injection embedded.
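The hostname-only check is the weak link, and it is easy to see why in code. A sketch of the two validation policies, assuming the redirect endpoint lives at the claude.com/redirect/ path described above (the validator is an approximation of the behaviour the article attributes to ad review, not Google’s actual implementation):

```typescript
// Hostname-only validation, as described in the article: the ad
// destination is approved because the hostname is claude.com,
// regardless of what the path does next.
function adDestinationLooksLegit(url: string): boolean {
  return new URL(url).hostname === "claude.com";
}

// A stricter policy for the site operator's side: a redirect endpoint
// on a trusted hostname should not forward to arbitrary URLs. Here we
// simply refuse to treat redirect-endpoint URLs as safe destinations.
function adDestinationStrict(url: string): boolean {
  const u = new URL(url);
  return u.hostname === "claude.com" && !u.pathname.startsWith("/redirect/");
}
```

The real fix belongs on the redirect endpoint itself (validate or allowlist the forwarding target), but the sketch shows why “the hostname is ours” is not a meaningful safety claim when an open redirect exists under that hostname.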
Oasis noted that the chain requires zero integrations, zero tools, and no MCP servers. Default Claude. Anthropic patched the URL-based prompt injection; fixes for the Files API exfiltration path and the open redirect were in progress at TechRadar’s reporting date.
Both attacks share the same structural property: the delivery mechanism is the legitimate ad ecosystem, the exploit surface is something Anthropic controls (a subdomain, a URL parameter, a redirect endpoint), and the outcome is silent data exfiltration with no meaningful user involvement.
Patch Status
For the Oasis “Cloudy Day” chain as of March 19: the URL-based prompt injection was patched. The Files API exfiltration path and the open redirect on claude.com were still being addressed.
For the Arkose Labs XSS chain reported March 26: patch status had not been confirmed in available coverage at time of writing. The likely remediation paths are removing the Arkose component from the a-cdn.claude.ai subdomain, enforcing Content Security Policy headers that block injected script from executing, or updating the extension’s subdomain trust model to apply stricter controls.
What to Audit
If you are deploying AI browser extensions for your organisation, or building them, these findings define the audit surface:
Extension trust models. Which domains does your extension implicitly trust? Does that trust extend to all subdomains? Which third-party scripts run on those subdomains? CSP headers on every privileged domain need review.
postMessage handling in trusted-origin iframes. DOM-based XSS via postMessage is a known vector that gets overlooked when the iframe host is a “first-party” subdomain. Audit whether your extension accepts messages from iframes without validating both the sender’s origin and the shape of the message content.
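The defensive counterpart to the vulnerable pattern is an exact-match origin allowlist checked before the payload is touched. A minimal sketch (the allowlisted origin is an assumption for illustration; in a real extension this check runs inside a `window.addEventListener("message", ...)` handler):

```typescript
// Exact-match origin allowlist for incoming postMessage events.
// The listed origin is an illustrative assumption.
const ALLOWED_ORIGINS = new Set([
  "https://claude.ai",
]);

function acceptMessage(origin: string): boolean {
  // Exact comparison only. Never use origin.includes("claude.ai"),
  // which "https://claude.ai.attacker.example" would also satisfy.
  return ALLOWED_ORIGINS.has(origin);
}
```

Note that subdomains are not implicitly accepted here: if a-cdn.claude.ai legitimately needs to send messages, it must be listed explicitly, which is precisely the review step this audit item is asking for.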
Third-party components on whitelisted domains. Analytics scripts, CAPTCHA widgets, chat tools, and A/B testing libraries running on a domain your AI extension trusts are part of your attack surface. Enforce Subresource Integrity on those scripts, or move them off domains with elevated trust.
URL parameter handling for AI interfaces. If your AI tool accepts pre-filled prompts via URL parameters, those parameters are a prompt injection surface. Any attacker who can construct a URL the victim will visit – via ads, QR codes, shortened links – can inject instructions. Sanitise or disable pre-filled prompts.
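One concrete mitigation for pre-filled prompts is to strip markup and invisible Unicode before the parameter ever reaches the model, so the user sees exactly what the model receives. A sketch, with the proviso that the character ranges below are illustrative and not an exhaustive inventory of invisible code points:

```typescript
// Strip HTML tags and common invisible Unicode (zero-width characters,
// word joiners, BOM) from a pre-filled prompt parameter. The ranges
// covered are illustrative, not exhaustive; a production filter should
// work from a maintained list of default-ignorable code points.
function sanitizePrefill(raw: string): string {
  return raw
    .replace(/<[^>]*>/g, "")                             // drop HTML tags
    .replace(/[\u200B-\u200F\u2060-\u2064\uFEFF]/g, ""); // drop invisible chars
}
```

Disabling pre-filled prompts entirely remains the stronger option; sanitisation narrows the channel but keeps it open.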
Outbound network access in AI sandboxes. The Oasis chain worked partly because Claude’s code execution sandbox could reach api.anthropic.com. Map every endpoint your AI tool’s sandbox can connect to and ask whether prompt injection could leverage those connections for exfiltration.
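The audit above amounts to building and interrogating an egress allowlist. A minimal sketch of the check, where the listed hostnames are illustrative assumptions (they are not Anthropic’s actual sandbox policy):

```typescript
// Deny-by-default egress policy for an AI sandbox: an outbound request
// is permitted only if its hostname is on an explicit allowlist.
// Hostnames here are illustrative examples, not a real policy.
const EGRESS_ALLOWLIST = new Set([
  "pypi.org",
  "files.pythonhosted.org",
]);

function egressAllowed(url: string): boolean {
  return EGRESS_ALLOWLIST.has(new URL(url).hostname);
}
```

The audit question is then mechanical: for each hostname on the allowlist, could an injected prompt use that endpoint to move data out? In the Oasis chain, the provider’s own API endpoint was the exfiltration channel, so “our own API” does not get a pass.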
The underlying issue is that AI browser extensions extend significant trust to the AI provider’s infrastructure – reasonably so. But that trust becomes a weapon when the provider’s infrastructure carries exploitable third-party components. Your threat model now includes your AI vendor’s CDN choices.