This post is based on ProPublica’s investigation published March 2026, drawing on internal FedRAMP memos, emails, and meeting logs. All quotes from government reviewers are sourced from that reporting.


What’s New This Week

ProPublica published internal FedRAMP documents today revealing that federal cybersecurity reviewers flagged Microsoft’s Government Community Cloud High (GCC High) for inadequate security documentation as far back as 2020, could not verify its encryption practices after five years of asking, and authorized it anyway in late 2024, citing its existing deployment across Washington as the primary reason. The story confirms what the authorization record already suggested: the review concluded not because the questions were answered, but because the product was too embedded to reject.


Changelog

Date         Summary
18 Mar 2026  Initial publication based on ProPublica’s internal FedRAMP document investigation.

ProPublica published internal FedRAMP documents this week. They show that federal cybersecurity reviewers concluded Microsoft’s GCC High had inadequate security documentation, couldn’t verify its encryption practices after years of requests, and then approved it anyway. The stated reason: it was already too widely deployed to reject.

That is not a security process. It is a procurement process dressed in a lab coat.

What FedRAMP Is Supposed to Do

The Federal Risk and Authorization Management Program was created in 2011 under the Obama administration’s “Cloud First” policy. The premise was straightforward: instead of every federal agency independently vetting cloud products against federal security standards, FedRAMP would run a centralised review. Authorised products would appear on the FedRAMP Marketplace, and agencies could rely on that authorisation as evidence of compliance. “Do once, use many times.”

Microsoft’s Government Community Cloud High is a cloud services suite built for sensitive federal data – the kind of information the government classifies as requiring protection against threats that “could be expected to have a severe or catastrophic adverse effect” if the data were compromised. The Justice Department, Energy Department, and defence sector all rely on it.

Getting GCC High onto the FedRAMP Marketplace took five years. The reasons it took that long reveal a system that was never built to say no to a vendor large enough to make “no” untenable.

“The Package Is a Pile of Shit”

FedRAMP first raised concerns about GCC High’s security documentation in 2020. The specific ask was not exotic: detailed encryption diagrams showing how data is protected as it moves between servers. This is baseline due diligence for any system handling sensitive information. It is not an unusual request.

Microsoft did not provide it. Over the following years, the company produced what reviewers characterised as partial information “in fits and starts.” By late 2024, after roughly five years of this, the internal verdict was damning. Microsoft’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture.” One reviewer was more direct: “The package is a pile of shit.”

The significance of the encryption documentation gap is worth dwelling on. When you cannot verify how a system protects data in transit, you cannot reason about its security. You cannot assess whether a nation-state attacker with access to network traffic could intercept or decrypt that data. You cannot compare its controls against the threat model. You are, in effect, taking the vendor’s word for it – which is not a security posture. It is an assumption.
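To make the verification gap concrete, here is a minimal sketch of what first-party checking of data-in-transit protection can look like. It is illustrative only: the `probe_tls` helper and the TLS 1.2 floor are assumptions for the example, not FedRAMP’s actual control baseline or anything from the GCC High review.

```python
import socket
import ssl

def probe_tls(host: str, port: int = 443) -> dict:
    """Connect to a host and report the negotiated transport-security parameters.

    Illustrative helper: a real assessment would cover far more than the
    negotiated protocol and cipher (key management, internal hops, etc.).
    """
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols outright
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return {
                "protocol": tls.version(),  # e.g. "TLSv1.3"
                "cipher": tls.cipher()[0],  # negotiated cipher suite name
            }

def meets_baseline(report: dict) -> bool:
    """Check a probe report against an assumed minimum protocol version."""
    return report["protocol"] in ("TLSv1.2", "TLSv1.3")
```

The point is not the specific thresholds. It is that a check like this is only possible when you can observe or document how data moves; for the internal server-to-server hops the reviewers asked Microsoft to diagram, an outside party has no equivalent probe, which is why the documentation itself is the control.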

Too Embedded to Fail

Here is the mechanism that made “no” impossible.

FedRAMP rules allowed federal agencies to deploy GCC High while the review was still ongoing. The Justice Department did exactly this, authorising GCC High for internal use in early 2020 and in doing so placing it on the FedRAMP Marketplace as “in process.” That listing was not a warning label. In practice, it read as an endorsement. Other agencies took their cue from Justice and began deploying it.

By the time reviewers needed to render a verdict in late 2024, GCC High was load-bearing infrastructure across the federal government. ProPublica reports that reviewers explicitly cited the product’s existing deployment as a reason to approve it. The review did not conclude because the questions were answered. It concluded because rejecting the product at that point would have meant tearing out infrastructure from the Justice Department, the Energy Department, and the defence sector simultaneously.

This is the sunk cost fallacy institutionalised as policy. The review process created the conditions that made an honest review outcome impossible.

Engineering organisations run into this exact trap. A business unit adopts a SaaS tool without going through procurement. By the time security gets involved, the tool is already processing production data, integrated into three workflows, and embedded in a contract. At that point, “we can’t verify this meets our security requirements” is technically accurate but operationally useless. The tool stays. The risk stays with it. This is how trust assumptions accumulate in integrated systems – not through deliberate decisions, but through deployment that outpaces review.

Who Is Really Paying Whom

FedRAMP’s review process relies, in part, on third-party assessment organisations – independent firms hired to evaluate whether a cloud product meets federal security standards. In principle, this is sensible: it extends the government’s review capacity and brings in technical expertise. In practice, there is a structural problem.

The third-party assessors are hired and paid by the company being assessed – not by the government.

This is the same dynamic that produced AAA credit ratings on subprime mortgage bonds in 2007. When the entity responsible for assessing risk is financially dependent on passing the thing being assessed, the assessment is not independent. The incentive structure is not subtle. ProPublica found breakdowns “at every juncture” of the review process. The third-party assessor structure is not incidental to that finding.

This is not a new observation about FedRAMP. It has been raised before. What the GCC High story adds is a concrete example of how it plays out at scale, over years, with a vendor large enough that no one in the chain was willing to force the issue.

The Microsoft Track Record

The context in which FedRAMP authorised GCC High is worth stating plainly.

In 2020, Russian intelligence operatives compromised SolarWinds’ software update mechanism, inserting the SUNBURST backdoor, and used it to move through the networks of Microsoft customers, including multiple federal agencies. The National Nuclear Security Administration – responsible for maintaining the US nuclear weapons stockpile – was among the affected organisations.

In 2023, Chinese hackers tracked as Storm-0558 forged authentication tokens for Microsoft Exchange Online to access the email accounts of a Cabinet secretary and senior government officials. The breach was not discovered by Microsoft. It was discovered by the State Department, which noticed anomalous activity in its email access logs.

Both incidents were known to FedRAMP reviewers when they authorised GCC High in late 2024. The pattern they represent – nation-state actors systematically targeting Microsoft’s cloud infrastructure as a vector into federal systems – was fully visible. It did not change the outcome.

Tony Sager, who spent more than three decades as a computer scientist at the NSA, put it directly: “This is not security. This is security theater.”

What “FedRAMP Authorized” Actually Means Now

The FedRAMP authorization for GCC High is now actively used as a trust signal. “FedRAMP authorized” appears in procurement evaluations, vendor security questionnaires, and risk assessments across both government and the private sector. It is shorthand for “the federal government’s cybersecurity programme reviewed this and approved it.”

The GCC High story makes clear what that signal actually represents in this case: a five-year review that could not verify the product’s encryption practices, whose internal verdict was “a pile of shit,” and which authorised the product because it lacked the leverage to do otherwise. The authorisation came with what ProPublica describes as a “buyer beware” notice to agencies considering the product – a caveat that does not appear on the FedRAMP Marketplace listing itself.

This matters beyond the federal government. Any organisation using FedRAMP authorization as a due diligence shortcut for cloud vendor evaluation is relying on a signal that may be weaker than assumed – and is certainly weaker than the badge implies. The credential chain failure mode extends beyond specific software vulnerabilities. It includes the institutional trust signals that organisations use to justify not looking too closely.

What Engineering and Security Teams Should Take From This

FedRAMP is still a useful baseline. A product that has gone through the process has at least been subjected to a standardised review framework, which is more than can be said for most vendor security claims. The floor matters.

But the GCC High case reveals the ceiling. A review process that cannot say no to a large enough vendor is not an independent assessment. It is a compliance ritual.

For engineering and security teams making vendor decisions, the practical implications are narrow but important:

Treat FedRAMP authorization as necessary but not sufficient. Verify the specific controls that matter to your threat model. For systems handling sensitive data in transit, that means actually reviewing the encryption documentation – the same documentation FedRAMP reviewers spent five years trying to get from Microsoft and never fully received. If the vendor cannot produce it, or produces it “in fits and starts,” that is a signal worth taking seriously regardless of what any marketplace listing says.

Be especially sceptical of vendors who are too embedded to audit properly. The nation-state attack patterns targeting institutional infrastructure do not care about vendor size. Attackers are not deterred by market share. The fact that a vendor’s product is everywhere is an argument for more scrutiny, not less.

And if your organisation is considering adopting a cloud service before its security review completes, recognise that you are creating the conditions for exactly the dynamic described here. By the time the review renders a verdict, the outcome may already be determined by deployment, not by evidence.

The Shape of the Problem

FedRAMP remains a useful floor. But the GCC High story reveals a system that found it easier to approve something it couldn’t verify than to reject something it couldn’t afford to reject. That distinction – between a security process and a procurement process that dresses as one – is worth understanding clearly, because organisations across the private sector build their vendor trust models on the same foundations.

The question is not whether to trust FedRAMP-authorized products. The question is what “trust” means when the review process was built to say yes, structured to create deployment lock-in, and staffed by assessors paid by the companies they assess. That question does not have a comfortable answer. But it has an honest one.