This post was triggered by Shubham Bose’s essay The 49MB Web Page and John Gruber’s commentary Your Frustration Is the Product. Both are worth reading in full before this one.


What’s New This Week

Bose’s network waterfall teardown of the New York Times homepage – 422 requests, 49MB, two minutes to settle – went viral this week after Gruber amplified it. The piece landed because it put precise numbers on something engineers have felt for years but rarely documented this rigorously. Nothing here shifts the thesis; everything here confirms it.


Changelog

Date – Summary
19 Mar 2026 – Initial publication.

The New York Times homepage requires 422 network requests and 49 megabytes of data. It takes two minutes to settle. Shubham Bose did the network waterfall analysis this week, John Gruber amplified it, and the reason it resonated is simple: everyone with an ad-blocker has felt this, and everyone without one is currently experiencing it.
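
You can reproduce this kind of measurement yourself. Here’s a minimal sketch that totals request count and transfer size from a HAR file exported from the browser’s network panel – the filename is a placeholder, and your numbers will vary by session, geography, and whatever the auction served you that minute:

```python
import json

# Parse a HAR file exported from the browser's network panel.
# "nyt-homepage.har" is a placeholder filename.
with open("nyt-homepage.har") as f:
    har = json.load(f)

entries = har["log"]["entries"]

# Sum on-the-wire bytes per response. _transferSize is Chrome's field
# for compressed transfer size; bodySize is the HAR-standard fallback.
# Both can be -1 when unknown, so clamp at zero.
total_bytes = sum(
    max(e["response"].get("_transferSize") or e["response"].get("bodySize") or 0, 0)
    for e in entries
)

print(f"requests: {len(entries)}")
print(f"payload:  {total_bytes / 1_000_000:.1f} MB")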

49MB is not an accident. It is not the result of bad engineering. It is the result of engineering that worked exactly as specified – and that is the problem worth examining.

The engineers did their jobs correctly

Before you scroll past this thinking it’s another “the web is bad” post: it isn’t. The web being bad is a symptom. The cause is something more specific and more instructive.

The developers who built NYT’s ad platform didn’t produce 422 network requests and a 49MB payload because they were incompetent. They built a system that maximises CPM yield through programmatic auction. They optimised time-on-page through engagement mechanics. They minimised reader bounce through notification traps and interstitials. The system downloads 5MB of tracking JavaScript before serving article content because that tracking infrastructure is what makes the auction work – and the auction is what pays for the journalism.

Every one of those decisions is defensible in isolation. You can walk through each one in a quarterly review and justify it. The programmatic auction increases revenue per impression. The autoplay video increases time-on-page. The newsletter modal increases subscription conversion. The sticky video player that follows you down the article and refuses to die generates pre-roll ad impressions that carry the highest CPMs on the page.

These are correctly-executing systems. The engineers who built them are, by any reasonable measure, good at their jobs. The objective function was “maximise CPM per session” and they maximised it.

The problem is the objective function.

Goodhart’s Law at scale

“When a measure becomes a target, it ceases to be a good measure.” Charles Goodhart articulated this in the context of monetary policy in 1975. The ad-tech industry has spent the last fifteen years demonstrating it at global scale.

Time-on-page started as a reasonable proxy for reader engagement. If someone spends three minutes reading your article, they probably found it valuable. That’s a plausible correlation. The ad-tech ecosystem operationalised this proxy – built auction mechanics around it, tied CPM rates to it, created reporting dashboards and engineering KPIs around it – and the moment it became the target, it stopped measuring what it was supposed to measure.

Once time-on-page is the metric, the rational engineering response is to maximise time-on-page. Autoplay video you didn’t ask for increases time-on-page. The modal demanding you subscribe to the newsletter before you can continue reading increases time-on-page. The sticky header, the cookie banner in the bottom 30% of the screen, the “Continue Reading” button that triggers a new ad load and registers as an engagement event – all of these increase time-on-page. None of them correlate with reader satisfaction. All of them are individually defensible in a metrics review.
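
A toy simulation makes the divergence concrete. Assume – all numbers invented for illustration – that satisfaction is the thing we care about and time-on-page starts out correlated with it. Each dark pattern added raises the proxy while cutting the target, and an optimiser that can only see the proxy keeps adding them:

```python
# Toy model of Goodhart's Law: optimise the proxy (time-on-page) and
# watch the real target (reader satisfaction) collapse.
# All numbers are invented for illustration.

def page_metrics(dark_patterns: int):
    time_on_page = 180 + 40 * dark_patterns          # each trap adds dwell time
    satisfaction = max(0, 100 - 25 * dark_patterns)  # and burns goodwill
    return time_on_page, satisfaction

for n in range(5):
    t, s = page_metrics(n)
    print(f"{n} dark patterns -> time-on-page {t}s, satisfaction {s}")

# An optimiser that can only see the proxy always adds more traps:
best = max(range(5), key=lambda n: page_metrics(n)[0])
print(f"proxy optimiser picks {best} dark patterns (satisfaction: {page_metrics(best)[1]})")
```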

Bose captures it cleanly: “Every hostile UX decision originates from this single fact. The longer you’re trapped on the page, the higher the CPM the publisher can charge. Your frustration is the product.”

The frustration is not a side effect. It is the intended output of a correctly-specified system.

The print/digital split is the proof

Gruber’s sharpest observation isn’t about the numbers. It’s about the comparison.

The New Yorker’s print edition is one of the most respectful reading experiences in media. The typography is careful, the layout gives the prose room to breathe, the advertising is present but restrained. You can sit with a long-form piece and the magazine doesn’t interrupt you every two paragraphs with an autoplay video unrelated to what you’re reading.

The New Yorker’s website puts autoplay videos, unrelated to the article you’re reading, between the paragraphs.

Same organisation. Same editorial team. Same audience. Same subscribers, many of them paying for both. The content values didn’t change when it moved to digital. The incentive structure did.

Print has no CPM mechanism. There is no programmatic auction running in the background of a print magazine rewarding the publisher for making you angrier. The economics of print are: print X copies, sell Y ads, charge Z for subscriptions. The reader’s experience doesn’t feed back into revenue at a millisecond timescale. There is no system sitting there, watching whether you paused on page 47, and adjusting the layout to trap you there longer.
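
A deliberately crude sketch of the structural difference, with every figure invented: print revenue is a one-shot linear function of copies, ad pages, and subscriptions, while the digital model feeds dwell time back into revenue on every single session – which is exactly the loop that rewards trapping the reader.

```python
# Crude contrast between the two revenue models. All figures invented.

def print_revenue(copies, ad_pages, subscribers):
    # Static: nothing the reader does on page 47 feeds back into this.
    return copies * 1.50 + ad_pages * 20_000 + subscribers * 60

def digital_session_revenue(impressions, cpm, dwell_seconds):
    # Feedback loop: longer dwell -> more viewable impressions -> more
    # auction revenue, billed per thousand impressions (that's the CPM).
    viewable = impressions * min(dwell_seconds / 60, 3)
    return viewable * cpm / 1000

# The same reader, trapped twice as long, is worth twice as much:
print(digital_session_revenue(impressions=20, cpm=4.0, dwell_seconds=60))   # 0.08
print(digital_session_revenue(impressions=20, cpm=4.0, dwell_seconds=120))  # 0.16
```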

The Guardian’s mobile website, according to Bose’s analysis, sometimes leaves 11% of the screen for article content. Four lines of article text visible at once. The Guardian’s print edition has none of this. Same organisation. The metric differences produce the design differences. Full stop.

The auction system creates a race to the bottom

Publishers don’t operate in isolation. They compete in a programmatic ad auction where every publisher’s inventory is pitted against every other publisher’s for the same pool of advertiser spend – and the auction rewards engagement mechanics that most publishers find distasteful.

Here’s the structural problem: if you are a publisher that improves reader experience by reducing ad density, you get lower CPM revenue. Your bounce rate might improve, your reader satisfaction might improve, your long-term subscriber retention might improve – but none of those are what the auction is measuring this quarter. What the auction measures is time-on-page and viewable impressions, and you just voluntarily reduced both.

The rational response is to match industry norms. The most aggressive player in the auction – the one with the most interstitials, the stickiest video player, the most notification prompts – sets the floor for what “normal” looks like. Everyone else matches or accepts lower revenue.

Bose puts it this way: “The publisher is held hostage by incentives from an auction system that not only encourages but also rewards dark patterns.”

This isn’t a failure of individual publisher ethics. This is a Nash equilibrium where the dominant strategy for every individual actor produces a collectively catastrophic outcome. Individual publishers can’t defect from this equilibrium without revenue consequences that their boards won’t accept.
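
The equilibrium claim is checkable with a toy payoff matrix – payoffs invented for illustration. Whatever the rival does, a single publisher earns more this quarter by staying aggressive, so aggressive-everywhere is the stable outcome even though respectful-everywhere would leave both publishers (and their readers) better off:

```python
# Toy two-publisher game. Payoffs are invented quarterly-revenue
# indices; rows are my strategy, columns are the rival's.
AGGRESSIVE, RESPECTFUL = 0, 1
payoff = [
    #  rival aggressive, rival respectful
    [100, 130],   # I play aggressive
    [ 70, 110],   # I play respectful
]

# Whatever the rival does, aggressive pays me more this quarter:
for rival in (AGGRESSIVE, RESPECTFUL):
    best = "aggressive" if payoff[AGGRESSIVE][rival] > payoff[RESPECTFUL][rival] else "respectful"
    print(f"rival strategy {rival}: my best response is {best}")

# So (aggressive, aggressive) at 100 each is the stable equilibrium,
# even though (respectful, respectful) at 110 each beats it for both.
```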

The AI content wave tightens the ratchet

Post-2025, this dynamic has an accelerant.

The explosion in AI-generated content has dramatically increased supply in the programmatic auction market. More pages chasing the same pool of advertiser spend means lower per-page CPMs across the board. The GPT-4-generated listicle farms competing in the same auction as the New York Times pull down average CPMs for everyone. Publishers with genuine content quality – journalism that takes weeks to produce – find themselves priced against content that took 30 seconds to generate.
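
The arithmetic is blunt. With advertiser spend roughly fixed over a quarter, average CPM is just budget divided by impressions. Here is the dilution under invented round numbers where AI farms triple the impression supply:

```python
# Supply-side dilution, in invented round numbers.
ad_budget = 1_000_000_000     # advertiser spend in the auction, fixed short-term
impressions_before = 200e9    # impressions before the AI content wave
impressions_after = 600e9     # triple the supply chasing the same spend

cpm_before = ad_budget / impressions_before * 1000   # $5.00
cpm_after = ad_budget / impressions_after * 1000     # $1.67

print(f"average CPM: ${cpm_before:.2f} -> ${cpm_after:.2f}")
# To hold revenue per session flat, a publisher now needs roughly
# three times the viewable impressions from every reader.
```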

The response, for publishers whose revenue depends on the auction, is to compensate with more aggressive engagement mechanics. Squeeze more CPM out of the readers they do have, because they’re getting less per-impression from the market. The adversarial design ratchet tightens another notch.

This is the dead internet problem applied to publisher economics: AI content floods the supply side, drives down per-unit value, and creates pressure on quality publishers to behave more like the low-quality publishers they’re competing against. The auction doesn’t discriminate between them.

The engineering lesson

The 49MB web page is the canonical example of correctly-specified, incorrectly-targeted engineering. This distinction matters for anyone building systems with engagement metrics – which is most of us.

The mistake is measuring the proxy instead of the thing. Time-on-page is a proxy for reader satisfaction. Viewable impressions are a proxy for advertising value. Click-through rate is a proxy for user interest. These correlations held before they became targets. Once they became targets, engineers optimised toward them directly, and the correlation broke.

The engineering lesson isn’t “don’t use metrics.” It’s: measure the thing you actually care about, not the proxy.

Did the reader find what they were looking for? Did they leave having understood something they didn’t before? Would they come back? These are harder to measure than time-on-page. They don’t feed cleanly into a programmatic auction. They require qualitative signals alongside quantitative ones. But they’re the thing – and when you optimise for the proxy instead of the thing, you end up with systems that are technically excellent and practically adversarial.
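
What measuring the thing can look like in practice, as a sketch rather than a prescription: pair a sparse direct signal (a one-question “did this help?” prompt, a seven-day return visit) with the proxy, and treat divergence between them as the alarm. Every field name here is hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Session:
    time_on_page_s: float         # the proxy
    said_helpful: Optional[bool]  # direct signal, sparsely collected
    returned_within_7d: bool      # closer to the thing itself

def proxy_score(sessions):
    return sum(s.time_on_page_s for s in sessions) / len(sessions)

def target_score(sessions):
    rated = [s for s in sessions if s.said_helpful is not None]
    helpful_rate = sum(s.said_helpful for s in rated) / max(len(rated), 1)
    return_rate = sum(s.returned_within_7d for s in sessions) / len(sessions)
    return 0.5 * helpful_rate + 0.5 * return_rate

# The Goodhart alarm: if proxy_score trends up while target_score trends
# down, the correlation you were relying on has already broken.
```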

This pattern appears everywhere engagement metrics exist. Recommendation algorithms that optimise watch time produce outrage and filter bubbles. Support ticket systems that measure time-to-close produce premature closures and frustrated users. Ad-tech surveillance infrastructure that optimises CPM produces 49MB web pages. In each case, the engineers were doing their jobs correctly. The objective function was wrong.

Bose notes that “no individual engineer at the Times decided to make reading miserable. This architecture emerged from a thousand small incentive decisions, each locally rational yet collectively catastrophic.” That’s exactly right. And it’s exactly why you have to get the objective function right before you start building.

What the 49MB web page actually is

The 49MB web page isn’t a failure of engineering. It’s a success.

The auction is working. The CPMs are being maximised. The time-on-page metrics are strong. The quarterly review will show green numbers. The system has achieved everything it was designed to achieve.

The problem is what it was designed to achieve.

The engineers who built it were solving the problem they were given. The product managers who defined the metrics were responding to the incentives they were given. The executives who approved the architecture were responding to the revenue model they were given. The revenue model was determined by an auction system that rewards adversarial design.

At every level, locally rational decisions. Collectively catastrophic outcome.

If you’re building a system with engagement metrics right now – and most engineers are – the question worth sitting with is: are you measuring the thing, or the proxy? And what happens to your product when the proxy becomes the target and the correlation breaks?

The answer is probably somewhere between “we’d notice it in the data” and “it’s already happened and we haven’t noticed yet.”