Your algorithm doesn’t know what success looks like
Google’s automation is now so good at finding conversions that it’s turning into a problem.
Performance Max will hit your tCPA. Smart Bidding will deliver the volume you asked for. And you’ll still lose money, because the system optimised exactly as instructed for a metric that doesn’t map to business outcomes.
This isn’t an automation failure, it’s a specification failure. And it’s the gap most paid media teams still haven’t closed.
The awkward truth of 2026 is that the technical execution layer – bid management, audience building, basic creative iteration – has been effectively commoditised. Not by “AI agents” but by five years of steady platform improvements that finally crossed the competence threshold.
What hasn’t been commoditised is the layer above: deciding what the system should actually optimise for, defining where it can and can’t operate, and interpreting results in context the algorithm can’t see.
Most teams are still structured around the execution layer. That’s the mismatch worth fixing.
Three levers you still control
Platform automation has one consistent weakness: it treats your inputs as ground truth. Feed it garbage signals, vague boundaries or no commercial context, and it will optimise toward outcomes you don’t want. Your job is to control the inputs.
Signal quality
The algorithm only knows what you tell it. If your conversion event is “form submission”, you’ll get form submissions, including the junk leads and the competitors doing research.
The fix isn’t better bidding strategy; it’s better signal infrastructure.
This means:
- Feeding offline conversion data back into the platform
- Weighting conversions by margin or predicted LTV where possible (see the sketch below)
- Segmenting conversion actions so the system can distinguish between a £50 order and a £5,000 order
You’re not managing bids anymore; you’re teaching the bidder what “good” looks like for your business. The teams doing this well are seeing it compound: better signal quality improves auction selection, which improves traffic quality, which improves signal quality further.
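As a rough illustration of the margin-weighting idea, here’s a minimal Python sketch that turns raw orders into value-weighted offline conversions. The order fields, the junk filter, the 50% LTV weighting and the output column names are assumptions for the example; match them to your own data and to the import template your ad platform currently expects.

```python
# Illustrative sketch: turn raw orders into margin-weighted offline conversions.
# The order fields (gclid, revenue, cogs, predicted_ltv), the 0.5 LTV weight and
# the output columns are assumptions for the example - check them against the
# import template your platform actually expects.
import csv
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Order:
    gclid: str            # click ID captured at lead/checkout time
    revenue: float        # what the customer paid
    cogs: float           # cost of goods sold
    predicted_ltv: float  # 0 if you don't model LTV yet
    completed_at: datetime

def conversion_value(order: Order) -> float:
    """Teach the bidder what 'good' means: margin now, plus a conservative
    slice of predicted future value, rather than raw revenue."""
    margin = order.revenue - order.cogs
    return round(margin + 0.5 * order.predicted_ltv, 2)

def to_upload_rows(orders: list[Order], min_value: float = 1.0) -> list[dict]:
    rows = []
    for o in orders:
        value = conversion_value(o)
        if not o.gclid or value < min_value:
            continue  # drop junk: no click ID, or economically worthless
        rows.append({
            "Google Click ID": o.gclid,
            "Conversion Name": "offline_purchase_margin",
            "Conversion Time": o.completed_at.astimezone(timezone.utc).strftime("%Y-%m-%d %H:%M:%S%z"),
            "Conversion Value": value,
            "Conversion Currency": "GBP",
        })
    return rows

if __name__ == "__main__":
    orders = [Order("Cj0KCQ_example", 5000.0, 3200.0, 1500.0, datetime.now(timezone.utc))]
    rows = to_upload_rows(orders)
    with open("offline_conversions.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```

The file format is incidental; the point is that the value the bidder optimises toward reflects margin and future value, not raw revenue or raw form fills.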
Boundary definition
Automation defaults to efficiency. Left unconstrained, it will find the cheapest path to your target, which often means inventory you’d never consciously choose, placements that cannibalise organic, or audiences that convert once and never return.
Your job is to define where the system cannot go (a sketch of codified guardrails follows this list):
- Placement and inventory exclusions
- Frequency constraints that prevent saturation
- Audience exclusions that protect incrementality (existing customers, recent converters, employees)
- Budget boundaries that prevent over-concentration in single channels or tactics
This is unglamorous work. It’s also where the reported $26B in programmatic waste lives – not in “AI failure” but in boundaries no one bothered to set.
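To make that concrete, here’s a minimal sketch of boundaries written down as data and checked against a campaign snapshot, rather than living in someone’s head. The guardrail thresholds, placement names and snapshot fields are all hypothetical; swap in whatever your reporting exports actually contain.

```python
# Illustrative sketch: boundaries codified as data and checked routinely.
# Thresholds, placement names and snapshot fields are hypothetical.
GUARDRAILS = {
    "excluded_placements": {"made-for-advertising.example", "app_interstitials"},
    "required_audience_exclusions": {"existing_customers", "recent_converters_30d", "employees"},
    "max_frequency_per_week": 6,
    "max_share_of_budget_single_channel": 0.45,
}

def check_campaign(snapshot: dict) -> list[str]:
    """Return a list of guardrail breaches for one campaign snapshot."""
    breaches = []
    hit_placements = set(snapshot.get("placements", [])) & GUARDRAILS["excluded_placements"]
    if hit_placements:
        breaches.append(f"serving on excluded placements: {sorted(hit_placements)}")
    missing = GUARDRAILS["required_audience_exclusions"] - set(snapshot.get("audience_exclusions", []))
    if missing:
        breaches.append(f"missing audience exclusions: {sorted(missing)}")
    if snapshot.get("avg_weekly_frequency", 0) > GUARDRAILS["max_frequency_per_week"]:
        breaches.append("frequency above cap - saturation risk")
    if snapshot.get("share_of_total_budget", 0) > GUARDRAILS["max_share_of_budget_single_channel"]:
        breaches.append("budget over-concentrated in this channel")
    return breaches

# Example: a campaign that quietly picked up cheap app inventory
print(check_campaign({
    "placements": ["app_interstitials", "youtube.com"],
    "audience_exclusions": ["employees"],
    "avg_weekly_frequency": 9.2,
    "share_of_total_budget": 0.58,
}))
```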
Contextual interpretation
The platform will tell you that CPA spiked 40% last Tuesday. It won’t tell you that your main competitor launched a flash sale, that your product went out of stock for six hours, or that there’s a Reddit thread calling your brand a scam.
Algorithms operate in a commercial vacuum: they see patterns in data, not the market.
This is the interpretation layer humans still own:
- Explaining performance shifts that don’t match input changes
- Distinguishing between statistical noise and actual signal (a rough check is sketched below)
- Knowing when to override the algorithm’s recommendation because you have context it doesn’t
The teams that treat AI output as “the answer” rather than “a hypothesis to interrogate” are the ones making expensive mistakes.
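One small habit that supports this: before anyone reacts to a spike, check whether it even sits outside normal day-to-day variation. The sketch below is a crude screen, not a causal analysis; the daily CPA figures and the two-sigma threshold are invented for illustration.

```python
# Illustrative sketch: a crude screen for "is this CPA move noise or signal?"
# The daily CPA figures are invented; the 2-sigma threshold is a convention, not a rule.
from statistics import mean, stdev

def is_probably_signal(history: list[float], today: float, sigmas: float = 2.0) -> bool:
    """Flag today's CPA only if it sits outside the normal day-to-day band."""
    mu, sd = mean(history), stdev(history)
    return abs(today - mu) > sigmas * sd

trailing_28d_cpa = [41.2, 39.8, 44.0, 42.5, 40.1, 43.3, 38.9, 45.0,
                    41.7, 40.4, 42.9, 39.5, 44.6, 41.1, 43.8, 40.9,
                    42.2, 39.1, 44.3, 41.5, 40.7, 43.0, 42.6, 39.9,
                    41.8, 43.5, 40.2, 42.0]
tuesday_cpa = 58.8  # the "40% spike"

if is_probably_signal(trailing_28d_cpa, tuesday_cpa):
    print("Outside normal variation - go find the commercial cause "
          "(competitor promo, stockout, PR issue) before touching targets.")
else:
    print("Within normal variation - probably noise; don't overreact.")
```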
Be tactical when it comes to MMM outputs
There’s been a lot of debate about whether MMM can be “the source of truth” for everything.
It can’t – and it shouldn’t try to be. MMM is at its best when it’s used tactically for the decisions it’s designed for, such as:
- Budget setting (quarterly/annual)
- Scenario planning (“If we move £X from A to B, what happens?”), as sketched at the end of this section
- Channel role clarity (what drives short-term vs longer-term demand)
- Test prioritisation (where experiments will tighten uncertainty fastest)
Where MMM is not the right tool:
- Week-to-week bid optimisation
- Creative-level learnings
- Diagnosing single-campaign performance in isolation
The most effective approach is layered:
- MMM for strategic allocation and guardrails
- Experiments for incrementality confirmation where feasible
- Platform and creative testing for execution speed
When those layers work together, MMM stops being a debate and becomes a decision engine.
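To show what the scenario-planning use looks like in practice, here’s a toy worked example of the “move £X from channel A to channel B” question using a simple saturating response curve. The curve shape, parameters and spend figures are invented for the illustration; a real MMM estimates them from data and carries uncertainty with them.

```python
# Illustrative sketch of the "move £X from A to B" question using a
# Hill-style diminishing-returns curve. All parameters are invented.
def expected_revenue(spend: float, max_revenue: float, half_saturation: float) -> float:
    """Revenue approaches max_revenue as spend grows; returns diminish past half_saturation."""
    return max_revenue * spend / (spend + half_saturation)

channels = {
    # name: (current monthly spend £, max_revenue £, half_saturation £) - all hypothetical
    "paid_search": (80_000, 400_000, 30_000),   # closer to saturation
    "paid_social": (40_000, 500_000, 120_000),  # more headroom
}

def total_revenue(spends: dict) -> float:
    return sum(expected_revenue(spends[name], *params[1:]) for name, params in channels.items())

current = {name: params[0] for name, params in channels.items()}
shift = 20_000  # the "£X" being moved from search to social
scenario = {"paid_search": current["paid_search"] - shift,
            "paid_social": current["paid_social"] + shift}

print(f"Current mix : £{total_revenue(current):,.0f} expected revenue")
print(f"After shift : £{total_revenue(scenario):,.0f} expected revenue")
```

The answer only matters at this level of aggregation; the same model says nothing useful about which ad or bid to change on Tuesday, which is exactly why the layers above exist.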
Is there a monetary element?
Finance teams don’t just want insight – they want a number they can defend. And that’s where MMM often gets mishandled. If you want MMM to influence senior decision-making, give leaders money-grade framing (a worked sketch follows at the end of this section):
- Ranges (expected / upside / downside)
- Payback windows (weeks vs months)
- Risk statements (“If we cut brand spend by X, here’s the likely impact range and the uncertainty”)
This allows you to talk about MMM like an investment committee would:
- Not “what’s the ROI?”
- But “what’s the expected return and what’s the downside risk?”
It’s also worth identifying where you can add monetary clarity through structure:
- Separating baseline demand vs incremental impact
- Controlling for promotions, pricing, and distribution
- Clearly stating what’s in-scope and out-of-scope
The goal is not to pretend MMM is perfectly precise. The goal is to make budget choices more robust than gut feel.
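As an illustration of that money-grade framing, the sketch below turns a set of scenario draws (for example, posterior samples from an MMM, or resampled scenarios) into expected/upside/downside figures, the chance of not covering the outlay, and an indicative payback window. All the numbers are invented.

```python
# Illustrative sketch: scenario draws -> ranges, risk statement, payback window.
# The draws and investment figure are invented for the example.
from statistics import quantiles

# 12-week incremental profit (£) for a proposed +£100k brand push
draws = [72_000, 95_000, 120_000, 144_000, 108_000, 81_000, 155_000, 65_000,
         130_000, 103_000, 112_000, 98_000, 86_000, 139_000, 77_000, 125_000]
investment = 100_000

deciles = quantiles(draws, n=10)               # 9 cut points: P10 ... P90
downside, expected, upside = deciles[0], deciles[4], deciles[8]

print(f"Downside (P10): £{downside:,.0f} | Expected (P50): £{expected:,.0f} | Upside (P90): £{upside:,.0f}")
print(f"Chance of not covering the £{investment:,} outlay: "
      f"{sum(d < investment for d in draws) / len(draws):.0%}")

# Indicative payback, assuming the expected profit lands evenly over the 12 weeks -
# an assumption worth stating out loud to finance.
weekly = expected / 12
print(f"Indicative payback: ~{investment / weekly:.0f} weeks in the expected case")
```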
So what?
If your team’s value proposition is still “we’re really good at managing campaigns”, you’re competing with the platform’s own automation – a fight you’ll lose on cost and speed.
The defensible position is upstream: signal architecture, boundary governance, commercial interpretation. That’s harder to hire for, harder to train, and harder to replace with a button.
Structure your team around that, or watch margins compress until there’s nothing left.
Want to know more?