Commissioned, Curated and Published by Russ. Researched and written with AI.
What’s New (13 March 2026)
IBM X-Force published its Slopoly report today. Simultaneously, Palo Alto’s Unit 42 released its 2026 Global Incident Response Report with a directly corroborating observation: ransomware actors are “using AI to reduce manual work during deployment.” Two independent research teams, same conclusion, same week. The signal is clear enough to act on.
Changelog
| Date | Summary |
|---|---|
| 13 Mar 2026 | Initial publication. |
In early 2026, IBM X-Force investigated a ransomware intrusion. The attackers got in via a ClickFix social engineering attack, deployed a Node.js backdoor called NodeSnake, escalated to a more capable JavaScript RAT, and eventually dropped Interlock ransomware. Standard financially motivated playbook from a group IBM tracks as Hive0163.
But somewhere in the middle of that attack chain, they also deployed something nobody had seen before. A PowerShell-based command and control persistence client that X-Force named Slopoly. It maintained persistent access to the compromised server for more than a week. IBM assessed it as “likely AI-generated.” No existing antivirus or EDR signature matched it.
It wasn’t impressive malware. That’s the point.
What happened
Hive0163 is a financially motivated threat group whose model is straightforward: get inside a corporate network, exfiltrate as much data as possible, deploy ransomware, extort. IBM tracks several suspected relationships between the group and the developers behind Broomstick, SocksShell, PortStarter, and SystemBC, as well as ties to Rhysida ransomware operators. This is an experienced, well-resourced group with access to private malware frameworks and crypters.
The early-2026 intrusion started with ClickFix – a social engineering technique that manipulates users into pasting and executing a malicious PowerShell command via the Windows Run dialog. From there: NodeSnake for initial persistence, InterlockRAT for full C2 capability including SOCKS5 tunnelling and reverse shell access, then Slopoly as an additional persistence layer, then Interlock ransomware for the final payload.
Hive0163 has previously claimed attacks against DaVita (healthcare, nearly 2.7 million patients’ data exposed) and the Texas Tech University System (14 million patients). The group’s targets skew toward organisations with high-value data and tolerance for paying ransoms.
Slopoly was deployed late in the attack, after Hive0163 already had full access through InterlockRAT. IBM notes this pattern – using a novel tool alongside existing, proven backdoors – resembles a live-fire exercise. Testing AI-generated tooling in a real engagement while maintaining a fallback.
What Slopoly is (and isn’t)
Slopoly is a PowerShell script. It collects basic system information and sends it as JSON to a C2 server endpoint. It beacons every 30 seconds. It polls for commands every 50 seconds. Commands received from the server get executed via cmd.exe and the results sent back. It maintains a rolling log file. It establishes persistence via a scheduled task named “Runtime Broker.”
That’s it. IBM’s assessment: “mediocre at best.”
The code describes itself as a “Polymorphic C2 Persistence Client.” It isn’t polymorphic. It cannot modify its own code during execution. The variable naming is accurate and descriptive, the comments are extensive, the error handling is structured – all of which IBM flags as indicators of LLM authorship. Human-written malware doesn’t come with helpful inline documentation. The quality suggests a less advanced model, though IBM couldn’t determine which one. Importantly, the variable names themselves indicate the model was deliberately instructed to produce malicious code – whatever safety guardrails were in place were successfully circumvented.
What Slopoly lacked in sophistication, it made up for in novelty. No existing signature matched it. It ran for more than a week in a real production environment before being caught. For a persistence mechanism, that’s all you need.
Security coverage of AI-generated malware tends to focus on technical capability. Is it more advanced than what humans write? Does it use novel evasion techniques? Is the code quality better? These are the wrong questions. The relevant question is: did it work? Slopoly answers that clearly.
The attribution erosion problem
Modern threat intelligence depends on pattern matching. Code structure, control flow, variable naming conventions, API call sequences, specific technique choices – these are the fingerprints that link one piece of malware to a specific developer, and one campaign to a known threat actor. They’re how IBM links Slopoly’s operators to Rhysida. They’re how attribution confidence builds over time.
AI-generated code disrupts this model in a specific way.
IBM’s report puts it plainly: “Disparate, largely similar malicious C2 clients will become significantly more difficult to attribute to a single developer in the future, knowing that the effort needed to create it is just a fraction of what it used to be.”
Each AI-generated sample carries different structural characteristics even when the functionality is identical. The stylistic fingerprints that a human developer inevitably leaves – consistent variable naming patterns, preferred code structures, characteristic ways of handling errors – don’t carry through to LLM output the same way. You can generate ten functionally equivalent backdoors and get ten structurally distinct pieces of code.
This doesn’t just slow down detection. It slows down attribution. Attribution is what drives the coordinated response that disrupts threat actor infrastructure, triggers law enforcement action, and informs defensive posture across industries. An attack that delays attribution by weeks or months provides meaningful operational cover – not because the malware is sophisticated, but because the tooling pipeline is harder to fingerprint.
Slopoly may not have been designed with this goal in mind. It was probably built because it was fast to produce and novel enough to avoid detection. The attribution disruption is an emergent benefit.
Post-exploitation is where this fits
Slopoly wasn’t used for initial access. It was deployed after Hive0163 was already fully inside the network, with established C2 infrastructure through InterlockRAT.
This is the logical use case. Initial access still relies on proven tradecraft: ClickFix, credential theft, initial access brokers, known vulnerability exploitation. These techniques work, they’re refined, and there’s no reason to introduce an untested AI-generated tool at the point where discovery ends the entire operation.
But post-exploitation persistence is different. You’re already inside. You need custom tools that won’t match existing signatures – both to maintain access while the real work happens and to avoid tipping off detection that something is running. This is exactly where the cost/benefit of AI-generated tooling shifts in the attacker’s favour.
Generate a novel PowerShell C2 client. It doesn’t need to be sophisticated – it needs to not match any existing signature and do the basics reliably. An LLM produces that in minutes. The attacker deploys it alongside established tools as a redundant access mechanism. If it works, good. If EDR catches it, the other backdoors are still running.
IBM describes Hive0163’s use of Slopoly as possibly “live-fire exercise style” – real-world testing of AI-generated tooling with minimal operational risk because they had fallback access. That framing suggests the group is actively developing their AI malware capability, not just experimenting. The question for defenders is how long before this approach is refined and this pattern becomes standard practice.
The development economics argument
Writing custom malware used to require skilled malware authors. That’s expensive and rare. Even for well-resourced groups like Hive0163, custom tooling is a constraint – you have a finite number of people who can write a functional backdoor from scratch, and each tool represents significant investment to build, test, and maintain.
AI doesn’t make everyone a malware author. Initial access, operational security, network traversal, data exfiltration at scale – all of this still requires substantial expertise. But it changes the economics of one specific part of the operation: producing novel tooling that evades detection.
IBM’s broader tracking of AI use in malware development since late 2023 identifies three primary uses: scaffolding (generating code structure quickly), obfuscation, and translation between languages and platforms. None of these require state-of-the-art models. They require reliable code generation that can be directed with a prompt and lightly modified for deployment.
For groups that already have operational capability, AI reduces the custom-tooling cost without replacing the operational skill. Hive0163 didn’t need an AI-generated backdoor that was technically impressive. They needed one that was novel enough to persist undetected while the rest of the operation proceeded. The bar is low. The tools are widely accessible.
IBM is explicit that this is a numbers game. More custom tooling, generated faster, with lower development overhead. Defenders write a signature for one variant; the attacker generates another. The asymmetry was already uncomfortable. AI makes it worse.
What defenders need to do
The immediate implication is straightforward: signature-based detection is systematically degraded by AI-generated malware variety. A tool generated for a single attack, never seen before, by definition has no signature. Chasing signatures for AI-generated malware is a losing approach – there will always be more variants than signatures.
IBM’s recommendation is direct: prioritise behaviour-based detection over signature-based or malware-specific mechanisms. What does the process do, not what does the binary look like.
For Slopoly specifically – and for AI-generated PowerShell C2 clients in general – the behavioural indicators are observable:
PowerShell execution patterns. Slopoly runs as a scheduled task named “Runtime Broker” in C:\ProgramData\Microsoft\Windows\Runtime\. PowerShell processes spawned by scheduled tasks that are making regular outbound HTTP connections are detectable regardless of signature. Behavioural detection and alerting on PowerShell execution patterns matters more than identifying the specific script.
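As a concrete illustration of that pattern, here is a minimal triage sketch over hypothetical flattened EDR telemetry. The field names (`image`, `parent_image`, `parent_cmdline`, `outbound_http`) are illustrative assumptions, not any specific vendor’s schema; the logic is the behavioural test itself – PowerShell launched by the Task Scheduler service host that also makes outbound HTTP connections.

```python
def flag_scheduled_powershell(events):
    """Flag PowerShell processes launched by the Task Scheduler service
    that also make outbound HTTP(S) connections -- the behaviour Slopoly
    exhibited, independent of any file signature.

    `events` is a list of dicts with illustrative keys:
      image, parent_image, parent_cmdline, outbound_http
    """
    hits = []
    for e in events:
        is_powershell = e["image"].lower().endswith(
            ("powershell.exe", "pwsh.exe")
        )
        # Scheduled tasks run under svchost.exe hosting the Schedule
        # service (taskeng.exe on older Windows builds).
        from_scheduler = (
            e["parent_image"].lower().endswith(("svchost.exe", "taskeng.exe"))
            and "schedule" in e.get("parent_cmdline", "").lower()
        )
        if is_powershell and from_scheduler and e.get("outbound_http", False):
            hits.append(e)
    return hits
```

In a real environment this sits on top of process-creation and network telemetry (e.g. Sysmon events 1 and 3); the point is that the rule fires on Slopoly-like behaviour without ever having seen the script.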
Network egress monitoring. Slopoly beacons to its C2 server every 30 seconds and polls for commands every 50 seconds. Regular, patterned outbound connections from a server to an external host – especially to a new or low-reputation domain – are detectable behaviourally. The C2 domain plurfestivalgalaxy[.]com (94.156.181[.]89) would have generated anomalous egress traffic. Egress monitoring that flags new external connections from internal servers catches this class of behaviour.
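A fixed 30-second beacon is metronome-like, and that regularity is itself a signal. A common heuristic – sketched below, with an illustrative threshold, not a tuned one – is to compute the coefficient of variation of inter-arrival times per (source, destination) pair: values near zero mean clockwork traffic.

```python
from statistics import mean, pstdev

def beacon_score(timestamps):
    """Coefficient of variation (stdev / mean) of inter-arrival times
    for one (src, dst) connection pair. Near 0 = metronome-like traffic,
    e.g. Slopoly's fixed 30-second beacon. Returns None if there is
    too little data to judge."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return None
    m = mean(gaps)
    return pstdev(gaps) / m if m else None

def looks_like_beacon(timestamps, cv_threshold=0.1):
    """Flag connection timestamp series whose spacing is suspiciously
    regular. The 0.1 threshold is illustrative; real deployments tune
    it and combine it with destination reputation."""
    score = beacon_score(timestamps)
    return score is not None and score < cv_threshold
```

Attackers can jitter their beacon intervals to defeat this, which is why the recommendation pairs regularity scoring with flagging any new external destination from an internal server.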
ClickFix prevention. The initial access vector for this intrusion was ClickFix. IBM recommends disabling the Win+R shortcut or monitoring the RunMRU registry key. This is low-cost, high-impact hardening that stops a social engineering technique that Hive0163 and multiple other groups rely on.
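If you export RunMRU values for hunting, the triage can be a simple pattern match. The sketch below uses illustrative heuristics for the command shapes ClickFix lures push victims to paste into Win+R; the patterns are examples, not a complete signature set.

```python
import re

# Illustrative heuristics for triaging exported RunMRU values
# (HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU).
# ClickFix lures direct victims to paste commands like these into Win+R.
SUSPICIOUS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"powershell.*(-enc|iex|invoke-expression|downloadstring)",
        r"mshta\s+https?://",
        r"curl.*\|\s*(cmd|powershell)",
    )
]

def triage_runmru(values):
    """Return RunMRU entries matching ClickFix-style command patterns."""
    return [v for v in values if any(p.search(v) for p in SUSPICIOUS)]
```

Anything this flags on a machine where the user denies typing it is a strong ClickFix indicator, since RunMRU only records what was entered into the Run dialog.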
EDR investment. EDR and XDR tools with strong behavioural detection capabilities – process behaviour, network connections, registry modifications, file writes – are better positioned than signature-dependent tools. The threat is AI-enhanced malware generation, which means novel binaries and scripts will appear faster than signatures can be written. The defensive response is capability that doesn’t depend on prior observation of the specific malware.
IBM’s IoCs for the Hive0163 infrastructure are published in their report and worth hunting for in your environment.
The question that matters
The early discourse around AI-generated malware spent too much time asking whether it was technically impressive. It usually isn’t. Slopoly isn’t. Its own inline documentation claims capabilities it doesn’t have.
That framing is wrong. As IBM and Unit 42 both note: attackers use whatever works. Sophistication is a means, not an end. A PowerShell script that beacons every 30 seconds and polls for commands doesn’t need to be architecturally elegant. It needs to maintain persistence while the operator exfiltrates data and stages ransomware. Slopoly did that for more than a week against a real target.
The broader pattern with AI as an offensive tool is consistent: AI reduces the cost of producing novel, functional tooling. It doesn’t require the attacker to be more capable. It requires the defender to stop relying on signatures.
The question was never whether AI-generated malware is technically impressive. It’s whether it works well enough to achieve the objective while evading detection. Slopoly answers that question. The next one will too.