The Incentive Problem

Google's automation is genuinely impressive. Smart bidding, Performance Max, broad match with AI-assisted query expansion - these tools can drive real results, and in many accounts, they do. That said, it would be naive to treat Google as a partner whose interests are perfectly aligned with yours.

Google is an ad auction business. It earns revenue when you spend money. Its algorithms optimize toward the objective you set - but the definition of that objective, the cleanliness of your conversion data, and the structural guardrails you build around the algorithm are entirely your responsibility. Most advertisers hand Google a loosely configured campaign with a vague objective and then wonder why spend is high and returns are inconsistent.

The problem is never automation itself. The problem is automation without discipline. Google's tools are powerful enough to scale a profitable account quickly - and powerful enough to burn through a budget just as fast if the inputs are wrong. The discipline is in the setup.


Smart Bidding Works - But Only With the Right Inputs

Smart bidding - Target CPA, Target ROAS, Maximize Conversions - is a machine learning system that predicts conversion probability in real time and adjusts bids accordingly. It works. But it has a dependency that most advertisers underestimate: it needs enough conversion signal to make meaningful predictions.

Google's own threshold is approximately 50 conversions per campaign per month before Target CPA or Target ROAS bidding stabilizes. Below that volume, the algorithm is extrapolating from too little data. You get erratic delivery, wide swings in CPA, and campaigns that either over-spend chasing signal or under-deliver while waiting for it.

The correct approach depends on where you are in the volume curve:

| Monthly Conversion Volume | Recommended Bid Strategy | Rationale |
| --- | --- | --- |
| Under 20/month | Maximize Clicks | Build traffic and impression data before asking the algorithm to optimize conversions it has not seen. |
| 20-50/month | Maximize Conversions (no target) | Let the algorithm spend toward conversions without constraining it to a target it lacks the data to hit reliably. |
| 50-150/month | Target CPA | Enough signal to set a realistic cost-per-acquisition target. Start conservative - set your Target CPA 20% above actual CPA to give the algorithm room. |
| 150+/month | Target ROAS | Sufficient volume for revenue-weighted optimization. Requires accurate conversion values. See ROAS benchmarks before setting targets. |
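The volume tiers above reduce to a simple selection rule. A minimal sketch (the thresholds come from the table; the function names are ours):

```python
def recommend_bid_strategy(monthly_conversions: int) -> str:
    """Map monthly conversion volume to the bid strategy tiers above."""
    if monthly_conversions < 20:
        return "Maximize Clicks"
    if monthly_conversions < 50:
        return "Maximize Conversions (no target)"
    if monthly_conversions < 150:
        return "Target CPA"
    return "Target ROAS"

def initial_target_cpa(actual_cpa: float) -> float:
    """Start conservative: set the initial Target CPA 20% above
    your actual recent CPA to give the algorithm room."""
    return actual_cpa * 1.2
```

For example, an account converting 100 times a month lands in the Target CPA tier, and an actual CPA of $50 would start with a $60 target.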

The second input that breaks smart bidding: the wrong conversion actions. If your "conversion" is a page view, a 30-second session, or a checkout initiation - not an actual purchase or qualified lead - the algorithm optimizes toward those events and finds plenty of them. You get impressive conversion volume and poor business results. Audit your conversion actions before touching bids.

Conversion action hygiene

Mark only revenue-generating or pipeline-generating events as "Primary" conversion actions. Micro-conversions like add-to-cart or video views should be marked "Secondary" - tracked for insight but excluded from bid optimization. Mixing primary and secondary actions in a single smart bidding campaign is one of the fastest ways to inflate conversion numbers and degrade actual performance.
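The audit itself is mechanical once you have an export of your conversion actions. A sketch of the check, assuming a hypothetical export format (the field names here are illustrative, not the Google Ads API schema):

```python
# Events that should never be "Primary" - tracked for insight only.
MICRO_CONVERSIONS = {"add_to_cart", "page_view", "video_view", "begin_checkout"}

def flag_misconfigured(actions: list[dict]) -> list[str]:
    """Return the names of conversion actions marked Primary that are
    really micro-conversions, i.e. events that should be Secondary so
    they do not feed bid optimization."""
    return [a["name"] for a in actions
            if a["status"] == "Primary" and a["event"] in MICRO_CONVERSIONS]
```

Anything this flags is inflating your conversion counts and steering smart bidding toward events with no revenue behind them.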


Performance Max: The Campaign That Needs Guardrails

Performance Max is Google's all-surfaces campaign type - Search, Display, YouTube, Shopping, Gmail, Maps - driven by a single asset group and optimized by Google's AI toward your stated conversion goal. It is also the campaign type most likely to make your account look good while your business results stay flat.

The core issue: PMax defaults to chasing the easiest conversions available to it. And the easiest conversions in most accounts are branded search queries - people already searching for your brand name who were going to buy anyway. PMax claims credit for these conversions, reports high ROAS, and gives you the impression that the campaign is working. Meanwhile, your actual new customer acquisition has not moved.

The mandatory guardrails:

Performance Max Setup Checklist
  1. Brand exclusions at the campaign level. Add all brand name variants as negative keywords within your PMax campaign. Force branded traffic to a dedicated Brand Search campaign where you can control bids, measure incremental conversions, and keep CPA low without inflating PMax metrics.
  2. Audience signals, not audience targeting. PMax treats audience inputs as signals, not hard constraints - it can serve ads outside your defined audiences. Use your customer lists, site visitors, and in-market segments as signals to bias the algorithm, but do not expect them to function as exclusions.
  3. Pull your Search Terms report weekly. PMax now surfaces a search category breakdown (not full query-level data, but enough to identify waste patterns). Review it weekly. Queries that are irrelevant or cannibalizing other campaigns should trigger account-level negative keyword additions.
  4. Separate asset groups by intent or product category. Do not dump everything into one asset group. Separate prospecting assets from retargeting audiences, and separate product categories that have meaningfully different margins. The algorithm allocates differently - and you learn more - when asset groups reflect distinct conversion intents.
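The brand-variant list in step 1 is easy to under-build by hand. A small helper can at least generate the mechanical spacing and punctuation variants; misspellings still need a human pass (the function is ours, a sketch, not a Google Ads feature):

```python
def brand_negative_variants(brand: str) -> set[str]:
    """Generate spacing/punctuation variants of a brand name to seed
    a PMax brand-exclusion negative list. Covers only mechanical
    variants - add common misspellings manually."""
    base = brand.lower().strip()
    words = base.split()
    return {base, "".join(words), "-".join(words)}
```

For a two-word brand this yields the spaced, joined, and hyphenated forms; seed your negative list with all of them.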

If you are running PMax alongside a standard Shopping campaign, know that Google's guidance is to let them coexist - but PMax will typically get priority in the auction. If Shopping is your primary revenue driver, test PMax incrementally rather than switching entirely. Watch your overall Marketing Efficiency Ratio (total revenue ÷ total ad spend) as described in the attribution problem framework, not just PMax's self-reported ROAS.


The Negative Keyword List That Pays for Itself

The single highest-ROI maintenance task in a Google Ads account is building and updating a tiered negative keyword list. Most accounts have a shallow list of obvious negatives - "free," "jobs," "how to" - added at launch and never touched again. Months of wasted spend accumulate quietly underneath.

The structure that works:

  • Account-level shared negative list: Queries that should never trigger any campaign. Competitor brand names you do not want to bid on, irrelevant industry terms, navigational queries unrelated to your product, job-seeking modifiers. Build this once, apply it everywhere.
  • Campaign-level negatives: Terms that make sense in one campaign but would cannibalize another. If you have a brand campaign and a non-brand campaign running simultaneously, each needs negatives that prevent the other's traffic from bleeding over.
  • Ad group-level negatives: Used sparingly for precise match-type sculpting within a campaign. Less necessary in 2026 with consolidated campaign structures, but still valuable when a specific keyword generates irrelevant query variants.

The maintenance cadence matters as much as the initial build. Pull your search term reports every 7 days. Flag any query with spend and zero conversions that has accumulated meaningful cost. Add clear non-converters to the appropriate level of your negative list. Done consistently over a quarter, this practice alone can meaningfully reduce wasted spend without touching a single bid.
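The weekly triage described above is a filter-and-sort over the search term report. A sketch, assuming a hypothetical export with query, cost, and conversion fields (field names and the cost threshold are ours):

```python
def flag_wasted_spend(search_terms: list[dict], min_cost: float = 25.0) -> list[dict]:
    """From a weekly search-term export, return queries that accumulated
    at least `min_cost` in spend with zero conversions, highest cost
    first - candidates for the negative keyword list."""
    flagged = [t for t in search_terms
               if t["conversions"] == 0 and t["cost"] >= min_cost]
    return sorted(flagged, key=lambda t: t["cost"], reverse=True)
```

Run it every 7 days and push the confirmed non-converters into the appropriate negative tier.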

The accounts that perform best on Google are not the ones that bid most aggressively. They are the ones that have most precisely defined where they do not want to appear.

Campaign Structure That Does Not Fragment Your Signal

For years, the conventional Google Ads wisdom was to segment campaigns and ad groups as granularly as possible - single keyword ad groups (SKAGs), exact match everything, tight thematic silos. This made sense in a world of manual bidding, where you needed granular control to set the right bid for every query.

Smart bidding changed the math. Granular structures that once provided control now starve the algorithm of signal. When conversions are spread across 40 campaigns that each see 3 conversions per month, no campaign has enough data for smart bidding to function. The algorithm defaults to impression-chasing behavior, and your CPA climbs.

The modern approach: consolidate campaigns to ensure each has meaningful conversion volume. Broad match keywords, combined with a solid negative keyword list and a Target CPA or Target ROAS bid strategy, can cover the query landscape that used to require dozens of exact-match ad groups. The algorithm's ability to recognize high-intent queries from a broad match seed is genuinely better than it was three years ago - provided you give it the conversion data to learn from.

Consolidation does not mean chaos. It means fewer campaigns with more signal per campaign, tighter negative keyword scaffolding to control where broad match expands, and clear campaign-level objectives that align to distinct business goals (brand vs. non-brand, prospecting vs. retargeting).
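A quick way to spot over-fragmentation is to compare each campaign's monthly conversions against the roughly 50/month floor cited earlier. A sketch (the threshold comes from the section above; the function is ours):

```python
SMART_BIDDING_FLOOR = 50  # approx. conversions/month before Target CPA/ROAS stabilizes

def starved_campaigns(campaigns: dict[str, int]) -> list[str]:
    """Given {campaign_name: monthly_conversions}, list campaigns below
    the smart bidding signal floor - candidates for consolidation."""
    return [name for name, conv in campaigns.items()
            if conv < SMART_BIDDING_FLOOR]
```

If most of your campaigns land on this list, the structure is starving the algorithm, and merging thematically adjacent campaigns is usually the fix.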


Setting Targets Based on Your Economics, Not Your Wishlist

The most common smart bidding failure pattern: setting a Target ROAS or CPA that sounds good but does not reflect what the algorithm can actually achieve given your conversion volume, competition, and audience size. The algorithm responds to an unachievable target by dramatically restricting impressions - it would rather not show your ad than show it to someone it cannot confidently predict will convert at your target.

The right way to set a Target ROAS: start from your actual business economics. What gross margin does your product operate at? What blended ROAS do you need to be profitable after COGS, fulfillment, and overhead? That is your floor - the minimum ROAS below which you are losing money on every order. For a practical starting framework, the Noble Growth Ad Calculator can help you derive your break-even ROAS from real margin inputs.

Once you have your floor, set your initial Target ROAS approximately 10-20% above your actual recent ROAS - not at your aspirational ceiling. This gives the algorithm a target it can realistically pursue while learning. Ratchet the target upward gradually (no more than 15% at a time) as performance stabilizes. Aggressive target increases create instability; gradual increases compound performance over time.
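The two calculations above are worth making explicit. Break-even ROAS follows directly from gross margin (revenue × margin − spend = 0 implies ROAS = 1 / margin), and the ratchet rule caps each target increase at 15%. A minimal sketch:

```python
def break_even_roas(gross_margin: float) -> float:
    """The ROAS floor: the point where ad-driven gross profit exactly
    covers spend. revenue * margin - spend = 0  =>  ROAS = 1 / margin."""
    return 1.0 / gross_margin

def next_target_roas(current_target: float, desired: float,
                     max_step: float = 0.15) -> float:
    """Ratchet toward a desired Target ROAS by at most 15% per change."""
    return min(desired, current_target * (1 + max_step))
```

A 40% gross margin, for instance, gives a break-even ROAS of 2.5 - anything below that loses money on every order, regardless of what the target sounds like.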

Finally, remember that Google's reported ROAS operates on its own attribution model. As outlined in detail in the attribution problem piece, platform-reported ROAS and real business impact are not the same number. Use platform ROAS to make bidding decisions within the platform. Use your Marketing Efficiency Ratio - total revenue divided by total ad spend - to make budget decisions across your business. Keep those two numbers distinct and you will stop chasing metrics that cannot tell you whether your business is actually growing.
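To keep the two numbers distinct in practice, compute MER yourself from totals rather than reading anything off the platform. A one-function sketch:

```python
def marketing_efficiency_ratio(total_revenue: float, total_ad_spend: float) -> float:
    """Blended MER: all business revenue over all ad spend, across every
    channel. Use this for budget decisions; use platform-reported ROAS
    only for bidding decisions inside the platform."""
    return total_revenue / total_ad_spend
```

For example, $500k of total revenue on $100k of total ad spend is an MER of 5.0 - a number Google's attribution model never touches.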

Google's tools are among the most powerful distribution mechanisms ever built. They are also exceptionally good at spending money. The discipline - clean conversion data, realistic targets, maintained negatives, consolidated signal - is what determines which one you get.


Want an expert eye on your Google Ads account?

We audit campaign structure, conversion tracking, bidding strategy, and wasted spend - and give you a clear priority list for what to fix first.

Get my free audit →