
For most of the history of modern advertising, we had a fairly straightforward deal with media.
Television interrupted us at predictable intervals. Two ad breaks. Maybe three during a longer movie. Everybody understood the bargain. The content was subsidized by advertising, and in exchange we tolerated a certain amount of interruption. It wasn't always pleasant, but it was bounded. Structured. Negotiated.
The internet broke that structure.
Today you can open a ten-minute video on YouTube and get hit with an ad before the video starts, another one a minute in, two more somewhere in the middle, and then recommendations, overlays, banners, and sponsored blocks wrapped around the entire experience. Sometimes the same irrelevant ad repeats itself five or six times across the same session.
And this is the strange part.
These platforms supposedly know enormous amounts about us. They know what we search for, where we go, what we buy, what we watch, who we talk to, how long we pause on videos, and what we almost clicked on but didn't. Yet despite all of this data, digital advertising still often feels remarkably clumsy.
That cognitive friction is what I've been calling the Attention Tax. It is the small but constant mental cost we pay in exchange for "free" digital infrastructure. And honestly, we've mostly accepted it. We negotiate with it every day. We skip ads. We tune them out. We mentally filter the noise and move on. But I think AI is about to change this bargain quite dramatically.
AI might actually reduce the Attention Tax
For a long time, internet advertising remained mostly probabilistic.
Platforms would show enough ads to enough people and hope that some percentage converted. Even sophisticated targeting systems were still operating with relatively rough approximations of human intent.
Now something different is beginning to happen.
Tencent is integrating its Hunyuan AI models deeply across WeChat. Meta is increasingly automating targeting, bidding, creative generation, and campaign optimization through systems like Advantage+. Google is restructuring Search itself around AI-generated answers and recommendation flows.
And importantly, advertisers are increasingly paying these systems for outcomes rather than exposure. That shift matters.
For years, advertisers largely paid for impressions, clicks, reach, and probability. Now platforms are increasingly saying something closer to: tell us what a customer is worth to you and let the AI handle the rest. Who sees the ad. When they see it. What emotional framing works. Which image converts. Which sequence performs better. Which moment in the day creates the highest likelihood of action.
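Mechanically, "tell us what a customer is worth and let the AI handle the rest" reduces to bidding the expected value of each impression. A minimal sketch of that idea, where the function name and all numbers are illustrative rather than any platform's actual API:

```python
# Sketch of outcome-based bidding: the advertiser declares what a
# conversion is worth; the platform's model predicts a conversion
# probability for this user at this moment and bids the expected value.
# Everything here is illustrative, not a real ad platform interface.

def expected_value_bid(customer_value: float, p_convert: float) -> float:
    """Bid the expected value of showing this ad to this user right now."""
    return customer_value * p_convert

# An advertiser who values a conversion at $50:
# a 2% predicted conversion probability justifies a $1.00 bid,
# while a 0.1% probability justifies only about $0.05.
print(expected_value_bid(50.0, 0.02))
print(expected_value_bid(50.0, 0.001))
```

Under this scheme, every improvement in the model's ability to predict intent translates directly into higher bids on the right people and lower bids on everyone else, which is why better prediction is so economically valuable to the platforms.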
Early signals suggest this approach is working, and working really well.
Tencent's advertising revenues jumped significantly as Hunyuan-based targeting improved conversion performance across the WeChat ecosystem. Meta's automated advertising products continue growing because advertisers are seeing measurable outcome improvements. Google is aggressively integrating AI into Search precisely because predicting intent more accurately is economically valuable.
A bad ad is easy to ignore. A very good ad isn't.
This is the part I think many critics of advertising miss. People do actually want discovery. People want to find useful products. Useful services. Better tools, software, healthcare, education, financial products, entertainment. The list goes on. In theory at least, highly relevant advertising could reduce the Attention Tax substantially because the interruption itself starts becoming useful information rather than noise.
And honestly, that sounds like a better internet.
But this creates a much stranger problem: the manufacturing of intent
The more I think about this, though, the more I feel like AI targeting introduces a completely different tension. Because a bad ad is easy to ignore. A very good ad isn't.
At some point the advertisement stops feeling like advertising at all. It starts feeling more like intuition. Recommendation. Discovery. Convenience. The system begins surfacing things at moments where they feel naturally relevant to your life. And this is where the line starts becoming blurry. Is the platform helping you discover what you already wanted? Or is it slowly shaping what you end up wanting next?
Those two things are uncomfortably close to one another. Especially once systems become extremely good at prediction. If an AI system can reliably identify what you are likely to buy, desire, click on, emotionally respond to, or aspire toward before you consciously formulate those intentions yourself, then something deeper starts happening than traditional advertising.
The platform is no longer just competing for your attention. It is increasingly participating in the construction of your intent. And this is the part that feels genuinely unresolved to me.
Because the upside here is very real. A world with dramatically less irrelevant advertising would probably be better for almost everybody. Users waste less time. Advertisers waste less money. Discovery becomes more useful. Platforms become less noisy. Everybody wins.
Until the systems become so good at prediction that users can no longer clearly distinguish between something they independently wanted and something the machine became extremely good at making feel like their own idea.
So the interesting question is not whether prediction is good or bad. It is how we find the sweet spot: systems that support real intent without quietly becoming the authors of it.
And if a system knows what you are likely to want before you do, then how exactly would you ever know where the intention originally came from? That feels like the real negotiation we are entering now. AI targeting clearly works, and it clearly alleviates the Attention Tax. The open question is how much of our future intention we are comfortable allowing machines to quietly shape on our behalf.
There is a downside, though: maybe the Attention Tax does not disappear. Maybe it only changes form.
In the old version, the price we paid was distraction. We gave platforms our attention and, in return, accepted a certain amount of irrelevant noise.
In the AI version, the tax could become quieter and harder to see. The ads may become more useful. The recommendations may become better. The interruptions may feel less like interruptions. But the price we pay may shift from wasted attention to shaped intention.
And that is a much harder bargain to understand, because if a system becomes good enough at predicting what we want before we fully know it ourselves, then the question is no longer just whether the ad was relevant. The question is whether the intention was still fully ours.
The future: segments of one
There is one more force worth naming, because it changes the shape of everything above. The cost of producing an ad — the image, the copy, the cut, the variation — is collapsing. AI is doing to creative production what it already did to targeting. A campaign that used to need a brief, a shoot, an edit, and a media plan can now be generated, varied, and re-cut in minutes, at close to zero marginal cost.
At the same time, the volume of signal on each individual keeps compounding. Every scroll, pause, search, purchase, dwell, and skip adds another stroke to the portrait. And the systems reading that portrait are getting better — not just at predicting what you will click, but at sense-making across taste, mood, identity, and desire.
Put those two curves together and the economics flip. Because advertisers are now paying for outcomes, and outcomes are worth more than ever, the budget that used to fund one ad for a million people can fund a million ads for one person each. Production cost folds into the conversion margin. The unit of advertising stops being the campaign, or even the segment. It becomes the individual.
The unit of advertising is no longer the campaign. It is the person.
This is not a thought experiment. The experimentation loop that makes it work — generate, expose, measure, iterate — is already cheap enough to run millions of variants in parallel. Platforms can test which value proposition lands with which person, in which moment, in which emotional register, and then keep only the variant that converted. The losing variants cost almost nothing. The winning variant funds itself.
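The generate, expose, measure, iterate loop is, mechanically, a bandit problem over creative variants: keep showing the variant that currently looks best, while still occasionally testing the others. A minimal sketch using Thompson sampling, where the variants and their "true" conversion rates are simulated stand-ins for AI-generated creative:

```python
import random

# Sketch of the generate / expose / measure / iterate loop as a
# Thompson-sampling bandit over ad variants. The variant names and
# conversion rates are invented for illustration.

random.seed(0)

true_rates = {"variant_a": 0.010, "variant_b": 0.015, "variant_c": 0.030}
wins = {v: 1 for v in true_rates}    # Beta(1, 1) priors: one pseudo-win...
losses = {v: 1 for v in true_rates}  # ...and one pseudo-loss per variant

for _ in range(50_000):  # each iteration is one ad exposure
    # iterate: sample a plausible conversion rate for each variant
    sampled = {v: random.betavariate(wins[v], losses[v]) for v in true_rates}
    # expose: show the variant that currently looks best
    chosen = max(sampled, key=sampled.get)
    # measure: did this (simulated) user convert?
    if random.random() < true_rates[chosen]:
        wins[chosen] += 1
    else:
        losses[chosen] += 1

# Traffic concentrates on the highest-converting variant; the losers
# receive only a tiny share of exposures before being starved out.
for v in true_rates:
    print(v, wins[v] + losses[v])
```

The economics described above fall out of this loop directly: a losing variant costs only the handful of exposures needed to identify it as a loser, while the winning variant pays for the whole experiment out of its conversion margin.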
What Shein did to fashion — read demand in near real time, produce in micro-batches, kill what didn't sell — is the rough template. Except the next version runs on attention rather than inventory, and the micro-batch is one person.
We are already well on our way to this. The interesting question is no longer whether segments of one are coming. It is what it means to live inside a system that can manufacture a perfectly tailored proposition for you, on demand, at the moment you are most likely to say yes.
Where do we go from here?
No tidy answer here, but three things look real to us.
Don't bank on users opting out. Better targeting and better services genuinely help people — that is the whole point. Asking users to walk away from a bargain that is, on most days, working in their favour is a losing strategy, and probably the wrong battle to fight.
Educate inside the moment, not outside it. The leverage is in making people cognizant of the transaction while they are inside it — small, well-timed surfaces of awareness that don't break the experience but quietly restore agency. This is one of the spaces we are actively working on at Fabric.
The hardest piece is regulation. Overt manipulation and overt harm are, relatively speaking, the easy cases — you can see them, name them, and write rules against them. The harder case is the consensual one. When users opt in, and a system gradually shapes their desires, their orientation, their next move, where exactly did the persuasion end and the manipulation begin? In that subjective middle, regulation slips easily into overreach. We don't think there is a clean policy answer yet, and we are skeptical of anyone who says there is. The honest work is to keep the question open while the systems keep moving.