Commissioned, Curated and Published by Russ. Researched and written with AI.


What’s New

Two major incidents broke this week with a shared architecture. ShinyHunters was confirmed to have used GCP credentials harvested from the 2025 Salesloft Drift breach to access Telus Digital’s Google Cloud environment, extract a claimed petabyte of data, and demand $65M in extortion. Separately, the AppsFlyer Web SDK was confirmed to have served malicious crypto-stealing JavaScript between March 9 and 11, 2026 – a domain registrar incident that briefly made websdk.appsflyer.com a live malware delivery channel for every site running the SDK. Both incidents were confirmed this week; both are still developing.


Changelog

Date         Summary
15 Mar 2026  Initial publication covering the Drift → Telus chain and the AppsFlyer SDK supply chain poisoning.

Two incidents this week. Different targets, different attack methods, different sectors. The same underlying playbook.

Breach a trusted third-party tool. Mine what’s inside it. Use what you find to get somewhere bigger.

That’s it. That’s the pattern. And it’s being run systematically, at scale – reportedly by the same threat actor on both sides of this week’s news.

The Drift → Telus Chain

Telus Digital is the business process outsourcing arm of Canadian telecoms provider Telus. It provides customer support, content moderation, billing operations, fraud detection, and AI-powered customer service tooling to companies worldwide. That description matters: when a BPO provider gets breached, the breach doesn’t stop at the BPO’s own data. It extends to every client whose operations run through them.

On March 12, 2026, Telus Digital confirmed it had suffered a security incident involving “unauthorized access to a limited number of our systems.” That phrasing is doing a lot of work. ShinyHunters, the threat group behind the breach, claims to have exfiltrated close to one petabyte of data and began extorting the company in February – demanding $65 million. Telus did not engage.

The entry point was not a zero-day. It was not a sophisticated novel attack. It was credentials sitting in someone else’s stolen dataset.

In August 2025, ShinyHunters exploited compromised OAuth tokens from Salesloft’s Drift integration to access Salesforce customer instances across 760 companies, downloading support ticket data, customer records, and internal data at scale. Mandiant and Google Cloud’s threat intelligence group documented it at the time. The total claimed dataset: 1.5 billion Salesforce records.

Inside that data were Google Cloud Platform credentials belonging to Telus Digital. Someone had pasted them somewhere they shouldn’t have – a support ticket, a Salesforce note, a customer success interaction. It happens constantly. Engineers open a support case and include the connection string that was throwing errors. Customer success teams log auth tokens to debug integrations. These things flow into CRMs and support systems every day across every company, and almost nobody audits what’s in there.

ShinyHunters used those GCP credentials to access Telus’s environment, found a large BigQuery instance, and started pulling. After downloading the initial dataset, they ran TruffleHog – an open-source credential scanning tool – across the stolen data to find additional secrets. Those credentials enabled lateral movement into further Telus systems. The extortion demand came weeks later.
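To make the mechanics concrete, here’s a minimal sketch of the kind of regex-based scanning a tool like TruffleHog automates. The patterns and ticket text are illustrative only – real scanners ship hundreds of detectors and verify candidate hits against live services:

```python
import re

# Illustrative patterns for common credential shapes. Real scanners
# (TruffleHog, Gitleaks) use far larger rule sets plus verification.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "gcp_service_account": re.compile(r'"type":\s*"service_account"'),
    "generic_api_key": re.compile(r"(?i)\b(api[_-]?key|token|secret)\b\s*[:=]\s*\S{16,}"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in a blob of text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# Example: a (fabricated) support ticket body with a pasted AWS-style key.
ticket = "Auth keeps failing with key AKIAABCDEFGHIJKLMNOP, please advise."
print(scan_text(ticket))
```

Run across a few hundred gigabytes of exfiltrated support tickets, even this naive approach surfaces the keys people pasted while debugging – which is exactly the pivot ShinyHunters found.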

Telus Digital has confirmed the incident and engaged forensic experts. ShinyHunters claims 28 downstream clients were affected; the names have not been disclosed publicly, and the list remains unconfirmed. Given that Telus Digital handles customer support and call center outsourcing for enterprises, the actual blast radius could be substantial.

The AppsFlyer SDK Poisoning

On March 9, 2026 at 22:45 UTC, something changed on websdk.appsflyer.com.

AppsFlyer is one of the dominant mobile measurement partner (MMP) platforms – it handles marketing attribution and in-app event tracking for 15,000 businesses across more than 100,000 mobile and web applications. When a site loads the AppsFlyer Web SDK, it’s loading JavaScript directly from AppsFlyer’s servers. Every site that does this trusts that the code served is the code AppsFlyer wrote.

On March 9, it wasn’t.

Researchers at Profero discovered malicious JavaScript being delivered through the SDK from its official domain. The payload was designed to look like normal SDK code. It preserved the SDK’s standard functionality. In the background, it decoded obfuscated strings at runtime, hooked into browser network requests, and watched for cryptocurrency wallet addresses being entered anywhere on the page. When it found one, it replaced the address with the attacker’s wallet and exfiltrated the original address and metadata. Bitcoin, Ethereum, Solana, Ripple, TRON – all targeted.
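The address-matching step in that payload is nothing exotic – wallet formats are regular enough to match with a handful of patterns, and a defender can use the same trick to flag wallet-like strings in page content or outbound traffic. A rough sketch; the patterns below are simplified and not exhaustive:

```python
import re

# Rough address shapes for a few chains (illustrative, not exhaustive;
# real validation would also check length rules and checksums).
WALLET_PATTERNS = {
    "bitcoin": re.compile(r"\b(bc1[a-z0-9]{25,59}|[13][a-km-zA-HJ-NP-Z1-9]{25,34})\b"),
    "ethereum": re.compile(r"\b0x[a-fA-F0-9]{40}\b"),
    "tron": re.compile(r"\bT[1-9A-HJ-NP-Za-km-z]{33}\b"),
}

def find_wallet_addresses(text: str) -> list[tuple[str, str]]:
    """Return (chain, address) pairs for wallet-like strings in text."""
    hits = []
    for chain, pattern in WALLET_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((chain, m.group(0)))
    return hits
```

Anywhere your monitoring sees an address-like string leave the page that the user didn’t type, you have a candidate for exactly this class of swap-and-exfiltrate payload.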

The exposure window ran from March 9 to approximately March 11. Any user who entered a crypto wallet address on a site running the AppsFlyer Web SDK during that window may have sent funds to an attacker.

AppsFlyer confirmed the incident as a “domain registrar incident” – the domain was temporarily controlled by an attacker who served unauthorized code through it. Their mobile SDK was not affected. They’ve stated customer data on AppsFlyer’s own systems was not accessed. The investigation is ongoing.

ShinyHunters separately claimed responsibility for a breach at Match Group – Hinge, Tinder, OkCupid, and Match.com – affecting over 10 million records, attributed to the AppsFlyer SDK compromise. Match Group has not confirmed this.

The Cascade Problem

These two incidents feel different on the surface. One is credential theft via a CRM breach chain. The other is SDK supply chain poisoning. But the structure is the same: a trusted intermediary becomes the attack vector.

What’s changed is the methodology on the credential side. ShinyHunters isn’t just stealing data – they’re industrialising the harvest. Running TruffleHog across stolen datasets is systematic credential extraction at scale. Every breach in their archive is now a potential pivot into a new company. The Drift dataset was a credential store they didn’t fully exploit until six months later, when they found Telus’s GCP keys buried in it.

We don’t know which other companies had credentials in that same 1.5 billion record Salesforce dataset. We probably won’t know until those companies confirm breaches. The lag between “breach occurred” and “affected company discovers it” can be months – or longer if the attacker is patient and the entry point was indirect.

This is the cascade problem. It’s not one breach. It’s a breach pipeline, where each compromise becomes raw material for the next.

The same dynamic applies to the SDK poisoning vector, though the mechanism is different. AppsFlyer’s reach (100,000+ apps) means a single point of compromise delivers malicious code to every site in the ecosystem simultaneously. You didn’t need to breach Match Group directly to reach Match Group’s users. You needed to breach the infrastructure that Match Group trusts.

This is the same pattern as the Codecov breach in 2021, the same pattern as malicious packages in npm supply chains, and the same pattern as third-party processor compromises that never make headlines until someone downstream starts notifying customers. The distribution channel becomes the weapon.

83% of identity-origin breaches now follow a credential-first pattern – and supply chain compromises are one of the most efficient ways to harvest those credentials at volume.

Your SaaS Tooling Is a Credential Dump

The Telus breach started with credentials in a support ticket. That’s not an unusual place for credentials to end up.

Engineers open support cases when things are broken. Things are often broken because of authentication failures, misconfigured connections, or permissions errors. The natural thing to include in a support ticket is the thing that’s failing – which is often an API key, a connection string, or an auth token. It goes into Drift, or Zendesk, or Salesforce Service Cloud, or whatever CRM the vendor is running. It stays there. Nobody goes back and redacts it. Nobody audits the ticket history for secrets. The SaaS vendor gets breached six months later, and suddenly those credentials are in ShinyHunters’ dataset.

This isn’t a failure of individual engineers. It’s a failure of default assumptions about what support tooling is. CRMs and customer success platforms are designed for relationship data – they’re not designed with credential handling in mind, they don’t scan for secrets, and they don’t have the same security posture as a secrets manager or a CI/CD pipeline. Treating them as equivalent to those systems – in terms of what’s safe to put in them – is the mistake.

The same logic applies to Slack (external shared channels), Jira (where vendor tickets go), and any other SaaS tool that connects your internal systems to third-party services. Each of those tools is a potential credential surface. The question is whether you’ve audited what’s flowing into them.

SDK Supply Chain Risk and SRI

The AppsFlyer incident has a technical mitigation that’s rarely deployed: Subresource Integrity (SRI) hashing.

SRI is a browser security feature that lets you specify a cryptographic hash of an external script in the <script> tag. If the content at that URL doesn’t match the hash, the browser refuses to load it. For a JavaScript SDK loaded from a third-party CDN, SRI would have blocked the malicious payload the moment it changed – because the hash of the file would no longer match the hash in the tag.
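The integrity value is just a base64-encoded digest of the script bytes, prefixed with the hash algorithm. A quick sketch of generating one – the SDK bytes and CDN URL in the comment are placeholders:

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes, algo: str = "sha384") -> str:
    """Compute the value for a <script integrity="..."> attribute."""
    digest = hashlib.new(algo, script_bytes).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"

# Example: hash a locally vetted copy of a vendor SDK (placeholder bytes).
sdk = b"console.log('sdk v1');"
print(sri_hash(sdk))
# Then pin it in the page:
#   <script src="https://cdn.example.com/sdk.js"
#           integrity="sha384-..." crossorigin="anonymous"></script>
```

If the bytes served from the CDN ever differ from the bytes you hashed, the browser refuses to execute the script – which is precisely what pinning would have done during the AppsFlyer window.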

The reason SRI isn’t deployed against SDKs like AppsFlyer is that it’s fundamentally incompatible with how those SDKs are meant to work. If the SDK vendor ships updates – new features, bug fixes, performance improvements – the hash changes and every site pinning the old hash breaks. Vendors encourage loading from the canonical URL to get automatic updates. SRI requires you to consciously pin to a version and update that pin when you want to update.

That’s the tradeoff: automatic updates versus tamper detection. Most teams choose convenience. After events like this one, some will reconsider.

Even without SRI, monitoring for unexpected changes to third-party script hashes is achievable through Content Security Policy reporting, JavaScript integrity monitoring, or periodic automated comparisons against known-good hashes. None of those are standard practice either.
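A minimal version of that hash monitoring: record a known-good digest for each third-party script at review time, then compare on a schedule. The URL and baseline bytes below are hypothetical, and the periodic fetch is left to your scheduler and HTTP client:

```python
import hashlib

# Known-good digests recorded when each script was last vetted
# (hypothetical URL and content; record your own at review time).
KNOWN_GOOD = {
    "https://websdk.example.com/sdk.js":
        hashlib.sha384(b"console.log('sdk v1');").hexdigest(),
}

def verify_script(url: str, body: bytes) -> bool:
    """Compare a freshly fetched script body against its pinned digest."""
    return hashlib.sha384(body).hexdigest() == KNOWN_GOOD[url]

# In production: fetch `body` with urllib/requests on a cron schedule,
# alert when verify_script() returns False, and update the pin only
# after a human has reviewed the new version.
```

This doesn’t block a malicious payload the way SRI does, but it collapses the detection window from days to one polling interval.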

The more fundamental issue is that the “just load our SDK from our CDN” model distributes enormous implicit trust to the SDK vendor’s entire infrastructure stack – including their domain registrar. The AppsFlyer incident wasn’t a compromise of their application servers. It was a compromise of their domain registrar, which let an attacker point websdk.appsflyer.com wherever they wanted. That’s a trust surface most engineering teams have never audited.

Practical Mitigations

Six concrete actions for engineering and security teams:

1. Audit what flows into your support tooling. Pull a sample of recent tickets from your CRM, Zendesk, Salesforce, or equivalent. Look for credentials, tokens, connection strings, API keys. If they’re there, they’ll be there again. Define what’s not allowed in support tickets and enforce it – ideally with tooling that scans for secrets before submission.

2. Rotate credentials that have touched any breached SaaS. When a SaaS vendor you use confirms a breach, assume any credentials that ever appeared in that system are compromised. Don’t wait for investigation conclusions. Rotate at breach notification time, not breach investigation completion time.

3. Inventory your third-party JS SDKs. Know what’s running on your sites, where it loads from, and what it has access to. A marketing analytics SDK that loads from a third-party CDN has the same JavaScript execution context as your own code – including access to form inputs, session tokens, and wallet addresses if any are present.

4. Evaluate SRI or hash monitoring for high-risk SDKs. For any SDK loaded from an external CDN, assess whether SRI pinning is feasible, or implement automated hash monitoring to detect unexpected changes. Prioritise SDKs on pages that handle payments, crypto transactions, or authentication.

5. Review your BPO and third-party processor exposure. If you use any BPO provider for customer support, billing, or content moderation, understand what data flows to them and what access they have to your systems. The Telus Digital situation demonstrates that downstream exposure from a BPO breach can be substantial even if your own systems are never directly touched.

6. Apply TruffleHog logic to your own breach response. When responding to a breach, scan the compromised dataset for credentials the way an attacker would. If ShinyHunters is running TruffleHog across stolen data to find the next entry point, your incident response team should be doing the same – identifying what credentials were exposed before the attacker can use them.

The Trust Boundary Is the Perimeter Now

The network perimeter as a security boundary has been dead for a while. Zero-trust architecture acknowledges that implicitly. What this week’s incidents make concrete is that the same logic extends to the software and services you trust by default.

Your attack surface isn’t your network boundary. It’s the trust boundary of every SaaS tool, every third-party SDK, and every BPO provider in your stack. A breach anywhere in that chain is a potential credential source for an attacker who knows how to look.

ShinyHunters is demonstrating that systematically and at scale. The Drift breach happened in August 2025. Telus Digital confirmed the downstream impact in March 2026. The pipeline between those two events was six months of credential mining, patient access, and lateral movement.

The question for every engineering team isn’t whether your systems are secure. It’s whether the systems you trust are secure – and what you’re exposed to when they’re not.