Commissioned, Curated and Published by Russ. Researched and written with AI.


What’s New

Date        Version  Notes
2026-03-06  1.0      Initial publication

On 5 March 2026, the world’s largest encyclopedia went into read-only mode. Millions of editors locked out. Automated mass deletions across hundreds of pages. An edit summary reading “Закрываем проект” – Russian for “Closing the project” – stamped across the wreckage.

The initial community reports suggested compromised admin accounts, a credential stuffing wave, an active attack. The reality was stranger, and in some ways more instructive: a Wikimedia Foundation staff member accidentally activated a malicious JavaScript file that had been sitting dormant on Russian Wikipedia for eighteen months. Because of how privilege is structured in MediaWiki, that single activation was enough to inject code into the global site JavaScript, affect any user visiting Meta-Wiki, and trigger mass automated page deletions.

Wikipedia was not under external attack. The threat was already inside the system, waiting. That distinction matters a great deal.


What Actually Happened

The chain of events:

In 2023, a malicious JavaScript file was created to attack two Russian-language alternative wiki projects, Wikireality and Cyclopedia. In 2024, a user named Ololoshka562 created a page on Russian Wikipedia – user:Ololoshka562/test.js – containing that same script. It sat there untouched for a year and a half.

On 5 March 2026, a Wikimedia Foundation staff member was conducting a security review of user scripts and API limits. During that review, they inadvertently imported and activated the malicious script from their account on Meta-Wiki.

Here is where the blast radius becomes clear. The staff member held the role of global interface administrator. That role carries the ability to edit MediaWiki:Common.js on Meta-Wiki – the global JavaScript file that loads for every user across every Wikimedia project. Once the malicious script was activated under that account, it had access to that file. The script was active for 23 minutes. In those 23 minutes it mass-deleted pages in namespaces 0 through 3, spreading the attack summary across the edit history.

To contain the damage and prevent further spread, the Wikimedia Foundation placed all of its wikis into read-only mode for approximately two hours. All user JavaScript was disabled globally for most of the day. Affected pages have since been restored. No personal data was breached. No external attacker was involved.

The Wikimedia Foundation’s official statement was admirably candid: “We inadvertently activated dormant code that was then quickly identified to be malicious.” The irony of triggering the attack during a security review was noted explicitly.


Why Privileged Roles Are a Different Threat Category

Most organisations draw a clean line between “compromised regular account” and “compromised admin account.” The difference is assumed to be one of scope: admins can do more bad things.

That framing undersells it considerably.

The Wikipedia incident illustrates why. The global interface administrator role is not just “admin with more permissions.” It is a role that can modify code executed by every other user in the system. When that role is exercised – even legitimately, even accidentally – the blast radius is not “one account’s data.” It is “every user session across every wiki project simultaneously.”

This is a category difference, not a degree difference.

Audit trail gaps. Regular account actions – edits, logins, changes – are logged and auditable. Interface admin changes to global JavaScript are logged too, but the downstream effects are harder to trace. A malicious script running in user browsers does not leave server-side logs. It is invisible until pages start disappearing and editors start raising the alarm in Discord servers.

Trust propagation. Privileged accounts are trusted by the systems and users around them. Code deployed by a trusted account is executed without question. A junior developer’s commit goes through review. A global interface admin’s JavaScript change goes live immediately, trusted by the architecture itself.

Persistence without presence. The Wikimedia attacker did not need to maintain access to cause damage. The malicious script sat dormant for eighteen months. When it was activated – even by accident – it immediately had the permissions it needed. The threat actor had long since left the building. The loaded gun remained.

In your organisation, ask which accounts carry trust that propagates. Not just “can delete data” – but “can execute code that other accounts will run,” “can modify configuration that other services consume,” “can issue tokens that other systems will honour.” Those are the accounts that need the most scrutiny.


The Code Execution Path Nobody Audits

This incident is not primarily a story about credentials. No password was stolen. No MFA was bypassed. The attack vector was trusted code execution – specifically, the MediaWiki user script system.

Wikipedia’s user script model works like this: editors can create JavaScript files in their user space. Other editors can import and run those scripts. Interface administrators can deploy scripts to global pages like MediaWiki:Common.js, which runs for all users. It is a powerful, community-enabling system. It is also a supply chain.

The analogy to software engineering is direct. You have npm packages, pip dependencies, Terraform modules, CI pipeline actions. Each one is code your systems execute with some level of implicit trust. The question is not whether your systems execute third-party code – they do, everywhere. The question is: which trusted accounts have permission to introduce that code, and how is that introduction audited?

The failure mode here is not “someone was phished” or “a token was stolen.” It is: a legitimate account with legitimate permissions ran code it should not have, because the code was disguised as something else, and no human reviewed it before it executed.

This is why MFA arguments miss the point in many modern incidents. MFA protects the login. It does not protect against:

  • A legitimate user running code they did not inspect
  • A legitimate user with stale access making changes outside their current role
  • Dormant content (scripts, configurations, dependencies) that becomes malicious between creation and execution
  • Automated pipelines running under privileged service accounts

The Wikimedia incident was a human activating dormant malicious code. Your incident might be a pipeline pulling a dependency that was hijacked three months ago. The shape is the same.


Privileged Account Sprawl

Wikipedia has thousands of administrators globally. They are volunteers. They earn the role through demonstrated contribution to the community, and they retain it indefinitely unless they resign or are removed for cause. There is no offboarding process because there is no employer. There is no HR system that notices when a long-term admin stops contributing actively but retains their elevated permissions.

Interface administrators – the subset with the most dangerous permissions – are fewer. But the same structural problem applies. People accumulate roles. Roles persist.

This pattern is not unique to volunteer platforms. It is endemic to every organisation that has been running long enough.

In commercial organisations: former employees whose deprovisioning missed a service account. Contractors who finished their engagement but kept portal access. Developers promoted to engineering manager who still have production database access from their previous role. Test automation accounts with admin permissions because someone needed to debug a pipeline eighteen months ago and never revoked the access.

The problem compounds over time. Access accumulates like sediment. Periodic reviews happen infrequently, if at all. When they do happen, they focus on the most obviously suspicious accounts, not the dormant legitimate ones that quietly accumulated permissions for years.

Dormant accounts are not a negligible risk. They are high-value targets. An attacker who compromises a dormant privileged account can often operate for extended periods without detection: monitoring rarely covers accounts that are expected to be idle, so new activity on them draws little attention.

In the Wikimedia case, the malicious script was dormant content, not a dormant account. But the structural analogy holds: privileged capability that is not actively monitored is a risk that grows over time.


What to Actually Do

Practical steps, in priority order.

1. Map your code execution paths, not just your accounts.

Who can introduce code that runs with elevated trust? That includes: CI/CD service accounts, package managers, global configuration files, browser-executed JavaScript (especially in admin portals), Terraform/Ansible templates, and anything described as a “shared library.” Map these. Know which humans or service accounts can modify each one.
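The mapping itself can be as simple as a small inventory that records, for each high-trust execution path, what the code runs as and who may change it. A minimal sketch in Python – the path names, roles, and inventory entries below are illustrative assumptions, not details from the incident:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExecutionPath:
    """A place where introduced code runs with elevated trust."""
    name: str                # identifier for the execution path
    runs_as: str             # the trust level the code executes with
    modifiers: frozenset     # principals allowed to change it

# Hypothetical inventory -- entries are examples, not real systems.
INVENTORY = [
    ExecutionPath("global-site-js", "every user session",
                  frozenset({"interface-admin"})),
    ExecutionPath("ci-pipeline-definition", "deploy service account",
                  frozenset({"platform-team", "ci-bot"})),
    ExecutionPath("terraform-modules", "cloud admin role",
                  frozenset({"platform-team"})),
]

def paths_modifiable_by(principal: str) -> list[str]:
    """Which high-trust execution paths can this principal change?"""
    return [p.name for p in INVENTORY if principal in p.modifiers]
```

Even a static table like this answers the question that mattered on 5 March: which single principals can reach code that every other user executes.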

2. Require review for privileged code changes.

No single account should be able to modify globally-executed code without a second pair of eyes. This includes infrastructure-as-code, CI pipeline definitions, and – as Wikimedia is now learning – global JavaScript files. The review requirement should be enforced by the system, not by convention.
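System-enforced, as opposed to conventional, review can be expressed as a gate in the merge or deploy path. A minimal sketch, assuming a hypothetical list of privileged paths and a simple author/approver model:

```python
# Hypothetical set of paths whose contents execute globally.
PRIVILEGED_PATHS = {"MediaWiki:Common.js", "ci/pipeline.yml", "infra/main.tf"}

def may_merge(path: str, author: str, approvers: set[str]) -> bool:
    """Changes to globally-executed code need at least one approver
    who is not the author; other paths merge freely in this sketch."""
    if path not in PRIVILEGED_PATHS:
        return True
    return any(a != author for a in approvers)
```

The key property is that self-approval fails the check: the system, not team habit, guarantees the second pair of eyes.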

3. Implement short-lived sessions for privileged operations.

Regular user sessions lasting weeks or months make sense for usability. Privileged sessions – especially for interface admin, infrastructure access, or production changes – should be short-lived and require re-authentication before sensitive operations. If a session token is stolen, short expiry limits the window.

4. Run a dormant account audit. Right now.

Pull a list of every account with elevated permissions. Flag any that have not authenticated in 90 days. Flag any that have not performed a privileged action in 180 days. Do not just look at humans – include service accounts and API tokens. For each flagged account: confirm current need, reduce permissions to least privilege, or revoke entirely.
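The thresholds above translate directly into a flagging pass over your account list. A minimal sketch – the account fields are assumptions about what your identity system exposes:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Account:
    name: str
    is_service: bool                  # include service accounts too
    last_login: datetime
    last_privileged_action: datetime

def flag_dormant(accounts: list[Account], now: datetime) -> list:
    """Flag elevated accounts per the thresholds above: no
    authentication in 90 days, or no privileged action in 180 days."""
    flagged = []
    for a in accounts:
        reasons = []
        if now - a.last_login > timedelta(days=90):
            reasons.append("no-auth-90d")
        if now - a.last_privileged_action > timedelta(days=180):
            reasons.append("no-priv-action-180d")
        if reasons:
            flagged.append((a.name, reasons))
    return flagged
```

Each flagged account then gets one of three outcomes: confirm current need, reduce to least privilege, or revoke.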

5. Separate roles by blast radius.

Global interface administrator and regular administrator should not be the same permission level. If a role can modify code or configuration that affects every user, it needs additional controls: hardware keys required, sessions expire after 4 hours, all actions logged with explicit audit trail, change notification to a security channel.
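One way to make this concrete is to derive controls from blast radius rather than assigning them per role by hand. A sketch, with illustrative policy values matching the ones suggested above:

```python
def required_controls(blast_radius: str) -> dict:
    """Map a role's blast radius to the extra controls it warrants.
    The specific values are illustrative policy choices, not a standard."""
    base = {"mfa": True, "audit_log": True}
    if blast_radius == "global":
        # Roles that can affect every user get the strictest tier.
        base.update({
            "hardware_key": True,
            "session_ttl_hours": 4,
            "notify_security_channel": True,
        })
    return base
```

Under this model a regular admin and a global interface admin can never silently share a control tier: the moment a role's reach becomes global, the stricter requirements attach automatically.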

6. Model your break-glass scenarios.

If your most privileged accounts are compromised or unavailable, what is your recovery path? Do you have break-glass accounts with documented procedures? Are those accounts themselves properly secured and audited? Break-glass accounts are often created in a hurry during an incident and then forgotten – which makes them excellent targets.

7. Content auditing for supply chain risk.

Any code your privileged accounts can run or import should be subject to review before execution. This is uncomfortable for platforms built around community contribution, but it is increasingly necessary. The Wikimedia script had been publicly visible for eighteen months. A proactive audit would have found it.
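A proactive audit does not need to be sophisticated to be useful. A first pass can simply flag scripts that are long-dormant or contain patterns worth a human look – a sketch, where the patterns and the one-year dormancy threshold are illustrative assumptions:

```python
import re
from datetime import datetime, timedelta

# Illustrative patterns only -- real audits need a curated, evolving list.
RISKY_PATTERNS = [
    r"\beval\s*\(",                  # dynamic code execution
    r"importScript",                 # pulls in further remote code
    r"api\.php\?.*action=delete",    # destructive API calls
]

def audit_script(source: str, last_modified: datetime,
                 now: datetime) -> dict:
    """Flag user scripts that are long-dormant or contain patterns
    that warrant human review before they can execute."""
    hits = [p for p in RISKY_PATTERNS if re.search(p, source)]
    dormant = now - last_modified > timedelta(days=365)
    return {"risky_patterns": hits,
            "dormant": dormant,
            "needs_review": bool(hits) or dormant}
```

A pass like this over user script namespaces would have surfaced an eighteen-month-old file containing deletion calls long before anyone activated it.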


The Honest Conclusion

The Wikimedia Foundation handled this incident well. They were transparent, they contained it quickly, they restored affected content, and they publicly acknowledged the irony of triggering the incident during a security review. The 23-minute window of active damage, two-hour read-only window, and full recovery is, honestly, a reasonable incident response outcome for a platform of this scale.

But the structural vulnerabilities remain. Not just for Wikimedia – for every organisation with accumulated privileged access, long-running sessions, and code execution paths that are trusted because of the account they come from rather than the content they contain.

Wikipedia’s volunteer structure makes the problem visible in an unusual way. No offboarding. No role review cadence. Permissions that last until someone explicitly removes them. But the commercial equivalents are everywhere: the forgotten service account with production access, the senior engineer who left six months ago whose API key still works, the CI pipeline running under an account with more permissions than it needs because it was easier to grant broad access than figure out the minimum.

Your admin layer is your highest-value target. Not because admins are malicious, but because the blast radius of a mistake – or an attack, or a dormant malicious script running under the right permissions – is orders of magnitude larger than a regular account compromise.

When did you last audit it?

