Commissioned, Curated and Published by Russ. Researched and written with AI.
What’s New This Week
Intoxalock confirmed on the evening of March 22, 2026, that its systems had resumed and that calibrations, installations, and service centre support were available again. The outage ran from March 14 to March 22, a total of eight days. Drivers who received 10-day calibration extensions during the outage have been advised to visit a service centre before the window expires.
Changelog
| Date | Summary |
|---|---|
| 23 Mar 2026 | Initial publication, covering the full incident arc from attack to restoration. |
On March 14, someone hit Intoxalock’s systems. Eight days later, the Iowa-based company confirmed its services had resumed. In between: drivers across 46 US states found themselves locked out of their own vehicles – not because they’d been drinking, but because a cyberattack had taken out the cloud infrastructure their court-mandated breathalyzers depend on.
This isn’t a story about a company getting hacked. Those happen every day. It’s a story about what happens when safety-critical, legally mandated infrastructure is built with a single cloud dependency and no offline fallback.
What happened
Intoxalock reported a “cyber event” on March 14. According to TechCrunch, spokesperson Rachael Larson confirmed the cyberattack and said the company, part of compliance services group Solera, had taken steps to “temporarily pause some of our systems as a precautionary measure.”
Intoxalock has not disclosed what type of attack it was, whether ransomware, an intrusion, or something else, and has not confirmed whether any customer data was accessed. It has stated that internal data is secure but has not elaborated.
By March 18, Intoxalock had established a 10-day calibration extension for affected customers and set up a dedicated support texting line. It also offered to cover costs directly resulting from the pause. Systems came back online on the evening of March 22.
Eight days of downtime. For most software services, that’s a disaster metric. For a device that controls whether your car starts, it’s something else entirely.
What Intoxalock does
Ignition interlock devices are breathalyzers wired into a vehicle’s ignition. Drivers blow into them before starting the car; if the device detects alcohol above a preset threshold, the vehicle won’t start. Many states also require rolling retests – the device prompts the driver to blow again at random intervals while driving.
These aren’t optional gadgets. In most US states, courts mandate ignition interlock devices for DUI or OWI offenders as a condition of licence reinstatement. Refusing to install one, or failing to comply with its requirements, can mean a probation violation, a suspended licence, or in some cases re-arrest. The device is part of the legal sentence.
Intoxalock says it serves 150,000 drivers daily across 46 states. Those drivers are required by law to use the device. They don’t get to switch providers if the service goes down.
The failure mode
The specific mechanism that caused lockouts is worth understanding, because it’s not what most people initially assumed.
The attack didn’t directly brick the devices. The sensors kept working. The problem was calibration.
Intoxalock’s devices require periodic recalibration at an authorised service centre, typically every 25 to 30 days. This calibration process requires a connection to Intoxalock’s servers. Drivers due for calibration during the outage window couldn’t complete it, because the company’s systems were down. A device that can’t complete its scheduled calibration triggers lockout – the car won’t start.
The causal chain: cyberattack hits cloud servers, cloud servers go offline, calibration service unavailable, devices overdue for calibration trigger lockout, cars don’t start.
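To make that chain concrete, here is a minimal sketch of what cloud-gated calibration logic can look like. This is an illustration of the failure mode, not Intoxalock’s actual firmware; the interval, the zero-day grace period, and every name in it are assumptions.

```python
from datetime import date, timedelta

# Hypothetical sketch of cloud-gated calibration logic. Not Intoxalock's
# firmware; it only illustrates the causal chain described above.

CALIBRATION_INTERVAL = timedelta(days=30)  # assumed 25-30 day cycle
GRACE_PERIOD = timedelta(days=0)           # no offline fallback in this design


def calibration_server_reachable() -> bool:
    """Stand-in for a live connection to the vendor's calibration service."""
    return False  # during the outage, this effectively always failed


def can_start_vehicle(last_calibrated: date, today: date) -> bool:
    """Return True if the interlock allows the engine to start."""
    overdue = today - last_calibrated > CALIBRATION_INTERVAL + GRACE_PERIOD
    if not overdue:
        return True
    # Calibration is due, and clearing it requires the cloud service.
    # With the service down, the overdue state can never be cleared,
    # so the device holds the vehicle in lockout.
    return calibration_server_reachable()


# A driver whose calibration came due mid-outage:
print(can_start_vehicle(last_calibrated=date(2026, 2, 14), today=date(2026, 3, 18)))
# -> False: the car will not start, even though the driver is sober.
```

Note that nothing in that decision depends on the breath sample itself; the lockout is driven entirely by whether the backend was reachable in time.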
Drivers posting on Reddit described the experience. One wrote, per WIRED, that “our vehicles are giant paperweights right now through no fault of ours,” adding that they were “being held accountable at work and feel completely helpless.”
That’s the failure mode: not a compromised device, but a cloud dependency embedded deep enough in the compliance cycle that removing the cloud removes the ability to drive.
The design question
The calibration-requires-cloud architecture is almost certainly not accidental. Ignition interlock programmes are heavily regulated. States need to know that devices are being calibrated on schedule, that data is being logged, that non-compliant drivers are flagged. Centralised, cloud-connected calibration makes that audit trail trivially easy to produce. It’s the compliance-friendly design.
The failure mode is the other side of that design choice.
A fully offline calibration process – one where the device could be calibrated at a service centre without a live cloud connection, syncing data later – would have survived this outage. Drivers would have been inconvenienced, but their cars would have started. The compliance record would have been maintained; it just would have uploaded when connectivity returned.
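Here is a minimal sketch of that store-and-forward alternative, assuming the physical calibration can be validated locally at the service centre and the compliance record uploaded once connectivity returns. The file queue, function names, and record fields are hypothetical, not Intoxalock’s design.

```python
import json
import time
from pathlib import Path

# Hypothetical store-and-forward calibration: calibrate locally, queue the
# compliance record on disk, sync it to the cloud when it comes back.

QUEUE_DIR = Path("pending_calibrations")


def calibrate_offline(device_id: str, technician_id: str) -> dict:
    """Perform the calibration and log it locally; no cloud connection needed."""
    record = {
        "device_id": device_id,
        "technician_id": technician_id,
        "calibrated_at": time.time(),
        "synced": False,
    }
    QUEUE_DIR.mkdir(exist_ok=True)
    path = QUEUE_DIR / f"{device_id}-{int(record['calibrated_at'])}.json"
    path.write_text(json.dumps(record))
    return record  # the device accepts this record and resets its lockout clock


def sync_pending(upload) -> int:
    """Push queued records to the vendor's servers once connectivity returns."""
    synced = 0
    for path in sorted(QUEUE_DIR.glob("*.json")):
        record = json.loads(path.read_text())
        if record["synced"]:
            continue
        if upload(record):  # upload() stands in for the real reporting API
            record["synced"] = True
            path.write_text(json.dumps(record))
            synced += 1
    return synced
```

The audit trail the states care about still exists in this design; it just arrives late instead of not at all.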
Whether that architecture is technically feasible within the regulatory constraints of 46 different state programmes is a genuine question. Some states may require real-time data reporting. The regulations weren’t designed with cyberattack resilience in mind.
But “the regulations didn’t require it” and “this was the right design” are different things. The people who designed this system made a choice. That choice had a failure mode. The failure mode played out.
The IoT pattern
This isn’t unique to Intoxalock. It’s a pattern that plays out whenever IoT devices are built with centralised cloud dependencies for core functionality.
The economics make sense from the vendor’s perspective. Cloud connectivity enables remote monitoring, OTA updates, centralised data, reduced device complexity. You don’t need the device to be smart; you need it to be connected. The smarts live in the cloud.
The problem is that this architecture concentrates risk. An attacker who compromises the central servers doesn’t need to compromise 150,000 devices individually. They hit one target and take out all of them simultaneously. The centralisation that makes the system cheap to run and easy to audit is the same centralisation that makes the attack efficient.
This is structurally different from a software bug that affects many users. A bug can often be worked around: users can update, roll back, or switch to an alternative. When your car won’t start, there is no workaround that gets you to work.
The scale of simultaneous failure is also qualitatively different. Pre-cloud IoT devices failed individually, at different times, for different reasons. Cloud-dependent IoT devices fail together, instantly, at vendor-scale, for one reason. The failure distribution collapses.
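A toy comparison of the two distributions, using the 150,000-driver figure from above and an assumed per-device failure rate; it is illustrative only.

```python
import random

# Toy model: the fleet size is the reported figure, the failure rate is assumed.
N_DEVICES = 150_000
P_INDEPENDENT = 0.001  # assumed daily failure chance for a standalone device

random.seed(1)
standalone_down = sum(random.random() < P_INDEPENDENT for _ in range(N_DEVICES))
print(f"standalone devices down today: ~{standalone_down}")  # on the order of 150

# With a shared cloud dependency, one successful attack on the backend
# puts every device that needs the backend into the failed state at once.
backend_compromised = True
cloud_dependent_down = N_DEVICES if backend_compromised else 0
print(f"cloud-dependent devices down today: {cloud_dependent_down}")  # all 150,000
```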
What happened to affected drivers
Practically: if your car was in lockout before the outage ended, you likely needed it towed to a service centre for calibration. Intoxalock said it would cover costs that were a direct result of the pause. The company set up a texting line at 424-724-4689 for affected customers.
Legally: the concern for many drivers was whether missing a calibration would be reported to the court or probation officer as non-compliance. Intoxalock’s 10-day extension was meant to address this, but the company did not publish detailed guidance on how it would handle state reporting during the outage window. Individual cases will likely depend on state programme rules and whether the driver can document that the missed calibration was caused by the cyberattack rather than deliberate non-compliance.
Intoxalock said it serves roughly 6,000 customers in Connecticut alone, with an estimated 7 to 10 percent due for calibration during the outage window, according to Connecticut Public. Scaled across 46 states, the number of directly locked-out drivers runs into the thousands, though a precise nationwide figure has not been confirmed.
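The back-of-the-envelope arithmetic behind that scaling, using only the figures reported above; the nationwide range assumes the Connecticut share is representative, which has not been confirmed.

```python
# Rough estimates only, derived from the publicly reported figures above.
ct_customers = 6_000
national_customers = 150_000
due_low, due_high = 0.07, 0.10  # 7-10% due for calibration during the outage

print(f"Connecticut: {ct_customers * due_low:,.0f}-{ct_customers * due_high:,.0f} drivers")
# -> roughly 420-600 drivers

print(f"Nationwide, if that share held: "
      f"{national_customers * due_low:,.0f}-{national_customers * due_high:,.0f} drivers")
# -> roughly 10,500-15,000 drivers
```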
The broader takeaway
Systems with legal consequences attached need different resilience standards than systems without them.
If a streaming service goes down, you find something else to watch. If your ignition interlock device’s cloud service goes down and you miss a calibration, you may be in violation of a court order through no action of your own. The stakes are categorically different.
That asymmetry should flow back into design requirements. Offline fallback for calibration. Defined incident response protocols for state regulators. Clear, pre-agreed procedures for what happens to compliance records during vendor outages. These aren’t features you add later; they’re constraints that should shape the architecture from the start.
Currently, there is no federal standard requiring ignition interlock vendors to maintain service availability or to have resilience plans that account for cyberattacks. State programmes set the device and data requirements; they don’t typically address what happens when the vendor is compromised. That’s a gap.
Intoxalock’s systems are back. Drivers are getting their cars calibrated. The incident will fade. But the architecture hasn’t changed, the dependency hasn’t changed, and the next vendor to get hit will face the same failure mode.
The question isn’t whether this will happen again. It’s which vendor, and how many drivers.
Sources: TechCrunch, WIRED, Connecticut Public, CyberNews