If you take your PPC campaigns seriously (why wouldn't you?), you're always testing. Always. It's the only way to achieve long-term growth and gain insights that carry over into your other marketing channels. One problem that has been inherent since the beginning of PPC is the inability to do true A/B split-testing with variables like keywords, bids, ad text, ad groups, match types, dynamic keyword insertion, etc.

Yes, you could test them, but only by comparing metrics from different time periods (except for ads). For example, you'd have to run ads at a certain bid price for a while, change it, and then run them at the new bid price for a while. Then you'd compare the results from the two periods. The problem? You'd be likely to attribute any differences in those key metrics to your changes, but fluctuations in demand, shifts in competitor tactics, and uncontrollable circumstances (special events, etc.) could just as easily explain them.

Google's example of this involves advertising for soccer balls.  "Let's say you're advertising soccer balls, and you decide to increase your bids to get more traffic. Two days later, the World Cup starts, and your clicks and impressions increase substantially. If you had simply raised the bids in your campaign without running an experiment, you wouldn't know how much of the increase in traffic is due to the World Cup, and how much is a result of you increasing bids."

Let's say you raised your bids at the beginning of June and noticed this trend when doing analysis in July....

[Chart: clicks for "soccer balls"]

Alright, looks great.  Let's go ahead and keep that new bid.  What?  What's that?  That might not be the best thing.  Well now, why would that be?

[Chart: web search interest for "soccer balls"]

Ouch. That's the web search volume trend for that keyword phrase. Not so fast, my friend.

Enter the newest "seedless watermelon" in the AdWords system: AdWords Campaign Experiments (ACE). With ACE, you can run simultaneous split tests on most of the key variables in your campaigns by splitting traffic between your "control" group (the original) and your experiment group...AND...you can analyze the results of your tests before you apply them to all auctions. This lowers the risk of diving into new, unproven strategies by letting you control how much traffic you send to your experimental groups, which ultimately helps you make better decisions in your optimization efforts. You can split your traffic in 10% increments, from 90/10 all the way to 10/90.

The cool thing about this is that if you want to run a low-risk experiment and send 80% of your traffic to your control group and 20% to your experiment group, you can analyze the results and see whether the changes performed better. If they did, you can then run what is called a holdback experiment before you fully apply the changes to your campaigns. A holdback experiment involves running the exact same experiment again, but this time with the control at 20% and the experiment at 80%. This way, you confirm that the positive effects of your experiment hold up when it's exposed to a larger share of traffic.
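(If it helps to picture what that traffic split actually looks like, here's a rough Python sketch of the idea: each auction gets randomly assigned to control or experiment based on the percentage you pick. To be clear, AdWords handles all of this behind the scenes; the split_traffic() helper and the numbers below are made up purely for illustration.)

import random

def split_traffic(auctions, experiment_share=0.2, seed=None):
    """Randomly assign each auction to the control or experiment group."""
    rng = random.Random(seed)
    control, experiment = [], []
    for auction in auctions:
        if rng.random() < experiment_share:
            experiment.append(auction)
        else:
            control.append(auction)
    return control, experiment

# Low-risk first pass: 80% control / 20% experiment
control, experiment = split_traffic(range(10_000), experiment_share=0.2)

# Holdback pass: flip the split to 20% control / 80% experiment
holdback_control, holdback_experiment = split_traffic(range(10_000), experiment_share=0.8)

print(len(control), len(experiment))
print(len(holdback_control), len(holdback_experiment))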

When you go to analyze an experiment, you want to make sure that the differences in your numbers are meaningful rather than the result of random chance. Statistical significance is calculated based both on the number of auctions your campaign participated in and on the size of the differences in metrics. Google AdWords displays icons in your campaign when the math indicates that you can be 95%, 99%, or 99.9% confident that the differences are meaningful and not just due to chance.

The icons are arrows that show you whether a particular element you're experimenting with has achieved statistically significant results, and how confident you can be that those results will carry over to your campaign if you apply the experiment: one arrow means there is a 5% probability your results occurred due to chance, two arrows mean a 1% probability, and three arrows mean just a 0.1% probability.
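Google doesn't publish the exact math behind those arrows, but a simplified way to think about it is a two-proportion z-test on a metric like click-through rate, using the 5%, 1%, and 0.1% cutoffs mentioned above. The sketch below is just that, a simplification with made-up click and impression numbers, not Google's actual formula.

from math import sqrt, erf

def two_proportion_p_value(clicks_a, impr_a, clicks_b, impr_b):
    """Two-sided p-value for the difference between two click-through rates."""
    p_a, p_b = clicks_a / impr_a, clicks_b / impr_b
    pooled = (clicks_a + clicks_b) / (impr_a + impr_b)
    se = sqrt(pooled * (1 - pooled) * (1 / impr_a + 1 / impr_b))
    if se == 0:
        return 1.0
    z = abs(p_a - p_b) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # standard-normal tail probability

def arrows(p_value):
    """Map a p-value to the one/two/three-arrow confidence levels."""
    if p_value < 0.001:
        return 3  # 99.9% confident
    if p_value < 0.01:
        return 2  # 99% confident
    if p_value < 0.05:
        return 1  # 95% confident
    return 0      # not statistically significant yet

# Hypothetical results: control got 400 clicks on 20,000 impressions,
# experiment got 260 clicks on 10,000 impressions.
p = two_proportion_p_value(400, 20_000, 260, 10_000)
print(f"p-value: {p:.4f}, arrows: {arrows(p)}")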

The introduction of this new feature saves account managers time and makes testing in your AdWords account much more accurate, efficient, and profitable.


August 27, 2010





Mike Fleming specializes in Analytics and Paid Search for Pole Position Marketing, a leading search engine optimization and marketing firm helping businesses grow since 1998. Mike enjoys playing, writing and recording music along with playing basketball to get his workout in. He resides in Canton, Ohio with a girl who threw a snowball at him one day…then married him.

Mike and the team at Pole Position are available to help clients expand their online presence and grow their businesses. Contact them via their site or by phone at 866-685-3374.






Comments (2)

Hey Mike, I think I recognize you from the Market Motive forum.

I ran into this new feature just last week and the thing that struck me as odd is just how complicated it is to set up an ad test. They could have made testing ads much easier than having to create a duplicate adgroup, etc.

But it is a step in the right direction.

Keep up the good posts!

Hey Chad,

Nice to run into you again. I agree that figuring out how to do the testing in the interface was a bit confusing at first. Yeah, it seems to me that they could have just done the little beaker thing next to ads in current ad groups and let you send a certain amount of traffic to each. I'm sure there are things we don't know in terms of how the system works. I'm sure they'll iron out the kinks as time rolls along....


