Commissioned, Curated and Published by Russ. Researched and written with AI.
Michael Smith, a musician from North Carolina, pleaded guilty on March 19-20, 2026, to one count of conspiracy to commit wire fraud. He agreed to forfeit $8,091,843.64. Maximum sentence: five years.
The scheme ran from 2017 to 2024. Smith generated hundreds of thousands of tracks using AI music tools, uploaded them to Spotify, Apple Music, Amazon Music, and YouTube Music, then drove synthetic streams through thousands of automated bot accounts spread across the platforms to evade detection. Billions of streams. $10M+ in claimed royalties.
This is the first US criminal prosecution for AI-assisted streaming fraud.
The Engineering Problem, Not the Music Story
The music angle is a distraction. The interesting thing here is the detection failure.
Smith ran this for seven years. Not seven months – seven years. Across four major platforms simultaneously. The platforms have financial incentive, engineering resources, and direct access to streaming telemetry. And yet.
The royalty payout model creates the attack surface. Streaming royalties are distributed from a shared pool proportional to play counts. Every fake stream Smith collected didn’t just steal from the pool – it diluted every legitimate stream in it. When a bot plays a track, it takes a fractional cent away from every real artist on the platform. At billions of synthetic streams, the dilution is material.
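To make the dilution concrete, here is a toy pro-rata calculation. The pool size and stream counts below are invented for illustration; they are not any platform's real figures.

```python
# Toy model of a pro-rata royalty pool. All numbers are hypothetical.
def per_stream_payout(pool_dollars: float, total_streams: int) -> float:
    """Every counted stream earns an equal fraction of the shared pool."""
    return pool_dollars / total_streams

POOL = 1_000_000_000.0           # assumed monthly royalty pool
REAL_STREAMS = 200_000_000_000   # assumed legitimate streams in the period

clean = per_stream_payout(POOL, REAL_STREAMS)

# Inject synthetic streams: the pool is fixed, so every legitimate stream
# is now worth slightly less, and the difference flows to the fraudster.
FAKE_STREAMS = 4_000_000_000
diluted = per_stream_payout(POOL, REAL_STREAMS + FAKE_STREAMS)

print(f"per-stream payout, no fraud:   ${clean:.6f}")
print(f"per-stream payout, with fraud: ${diluted:.6f}")
print(f"paid out against fake streams: ${diluted * FAKE_STREAMS:,.0f}")
```

With these made-up numbers, the per-stream rate drops by about 2% and roughly $19.6 million of the pool lands on the synthetic catalogue. The per-artist loss is invisible; the aggregate transfer is not.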
The fraud is structurally similar to ad click fraud, and ad fraud detection has been a hard problem for twenty years with dedicated teams, ML pipelines, and still-imperfect results. Streaming platforms arguably had less incentive to solve it – fake streams inflate engagement metrics and make the platform look healthy. The cost is borne by artists, not the platform directly.
What Changed in 2024?
Smith started in 2017. The AI music tools available then were primitive compared to Suno, Udio, or anything running today. He was working with whatever generation tooling existed before the current wave. The fact that the scheme scaled to hundreds of thousands of tracks anyway is the tell: content generation was never the bottleneck. Distribution and stream generation were the bottleneck, and those are solved problems.
With 2026-era tools, the content side is trivial. Anyone with a Suno subscription can generate thousands of tracks. The bot infrastructure to drive streams is commodity. Smith’s scheme required real engineering effort in 2017; today it’s a weekend project.
The legal precedent matters here. Smith was charged under wire fraud statutes – not any AI-specific law. That’s the pattern. When the law hasn’t caught up to new technology, prosecutors reach for the closest existing statute that fits the conduct. Wire fraud fits: there’s a scheme to defraud, and the internet was used to execute it. The AI component doesn’t need a specific law to prosecute.
What the Platforms Should Have Caught
The signals that should be detectable in streaming fraud at scale:
Account clustering. Thousands of bot accounts streaming the same catalogue. Account creation patterns, IP clustering, device fingerprinting – standard fraud signals. The accounts were “spread across platforms to evade detection”, per the indictment, which suggests Smith knew per-platform volume checks existed and deliberately stayed under them. The signals were there; they just didn’t catch him for seven years.
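A crude sketch of one version of that check, assuming a simplified play-event schema of (account, IP, track) and made-up thresholds; the real schema and cutoffs would differ on every platform:

```python
# Sketch: flag /24 subnets where many accounts stream a near-identical catalogue.
# The event schema, subnet grouping, and thresholds are illustrative assumptions.
from collections import defaultdict
from itertools import combinations

def subnet(ip: str) -> str:
    """Collapse an IPv4 address to its /24 prefix."""
    return ".".join(ip.split(".")[:3])

def suspicious_subnets(events, min_accounts=50, min_overlap=0.8):
    """events: iterable of (account_id, ip_address, track_id) play records."""
    tracks_by_account = defaultdict(set)
    accounts_by_subnet = defaultdict(set)
    for account, ip, track in events:
        tracks_by_account[account].add(track)
        accounts_by_subnet[subnet(ip)].add(account)

    flagged = []
    for net, accounts in accounts_by_subnet.items():
        if len(accounts) < min_accounts:
            continue
        # Average Jaccard overlap of streamed catalogues across account pairs.
        sample = list(accounts)[:200]  # cap pairwise cost
        overlaps = [
            len(tracks_by_account[a] & tracks_by_account[b])
            / max(1, len(tracks_by_account[a] | tracks_by_account[b]))
            for a, b in combinations(sample, 2)
        ]
        if overlaps and sum(overlaps) / len(overlaps) >= min_overlap:
            flagged.append((net, len(accounts)))
    return flagged
```

In practice this would fold in device fingerprints, payment methods, and account creation timing, but the shape of the signal is the same: too many accounts, too close together, listening to the same obscure catalogue.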
Streaming behaviour. Real listeners have session patterns. They skip. They repeat tracks they like. They listen at times correlated with their timezone. Bot streams are regular, complete, and uncorrelated with human behaviour. At scale, the statistical fingerprint is visible.
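A rough per-account behavioural fingerprint along those lines might look like this. The features and cutoffs are illustrative assumptions, not anyone's production model:

```python
# Sketch: score one account's play history for bot-like regularity.
# Feature set and thresholds are assumptions for illustration only.
import math
from statistics import mean, pstdev

def looks_automated(plays) -> bool:
    """plays: list of dicts with 'duration_played', 'track_length', 'hour_of_day'."""
    if len(plays) < 100:
        return False  # not enough history to judge

    completion = [p["duration_played"] / p["track_length"] for p in plays]
    skip_rate = sum(1 for c in completion if c < 0.3) / len(plays)

    # Humans skip, and their completion ratios are noisy; bots tend to play
    # everything to (near) the end with very low variance.
    uniform_completion = mean(completion) > 0.97 and pstdev(completion) < 0.02

    # Hour-of-day entropy: human listening clusters around waking hours in one
    # timezone; round-the-clock, near-uniform play times push entropy toward
    # the maximum of log2(24) ~= 4.58 bits.
    counts = [0] * 24
    for p in plays:
        counts[p["hour_of_day"]] += 1
    probs = [c / len(plays) for c in counts if c]
    hour_entropy = -sum(q * math.log2(q) for q in probs)

    return skip_rate < 0.01 and uniform_completion and hour_entropy > 4.4
```

None of these features is decisive on its own; it's the combination, held steady over millions of plays, that separates a bot farm from an unusually dedicated listener.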
Catalogue composition. Hundreds of thousands of tracks uploaded by the same distributor account with no promotional activity, no social presence, no chart performance anywhere – but consistent streaming numbers. The signal-to-noise ratio on a legitimate artist’s catalogue looks different.
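And a sketch of the catalogue-level version, again with assumed field names and arbitrary thresholds:

```python
# Sketch: flag an uploader whose catalogue is huge, externally invisible,
# and streamed with suspicious uniformity. Field names are assumptions.
def suspicious_uploader(catalogue) -> bool:
    """catalogue: dict with 'track_count', 'monthly_streams_per_track' (list),
    'playlist_adds', and 'external_links' (socials, label, chart entries)."""
    streams = catalogue["monthly_streams_per_track"]
    if catalogue["track_count"] < 10_000 or not streams:
        return False

    # Legitimate catalogues are lopsided: a few hits and a long tail of
    # near-zero tracks. A flat distribution across tens of thousands of
    # tracks, with no playlisting and no external presence, is the anomaly.
    spread = max(streams) / max(1, min(streams))
    return (
        spread < 3
        and catalogue["playlist_adds"] == 0
        and not catalogue["external_links"]
    )
```

The 10,000-track cutoff is arbitrary; the point is that volume without any external footprint is itself a feature.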
None of these are easy at the scale the platforms operate. Spotify has 100 million+ tracks. But the fact that a single scheme ran undetected for seven years across four platforms suggests the detection systems weren't looking, the platforms weren't sharing signals with each other, or the incentive to find it wasn't strong enough.
The Royalty Pool Problem
The real damage isn’t to the platforms. Spotify and Apple Music paid out royalties they shouldn’t have – that’s a financial loss. The deeper problem is that every legitimate artist on those platforms received less than they should have, because the pool was diluted by synthetic streams.
There’s no mechanism to claw back those lost royalties to real artists. The forfeiture goes to the government. Artists who were effectively robbed – fractions of a cent per stream, millions of streams, across seven years – get nothing from the prosecution.
As AI content generation gets cheaper, this problem scales. The platforms need detection systems that work. Not reactive prosecution after seven years – proactive identification. The royalty pool is a commons, and commons are vulnerable to exactly this kind of extraction.
Smith’s plea is the first domino. It won’t be the last case. The question is whether the platforms treat this as a one-off or as evidence that their fraud detection has a systematic gap worth closing.
The tools to run this scheme in 2026 are free. The barrier isn’t technical anymore.