Over the past decade, ad testing has, for the most part, been a simple A/B test: change the messaging or URLs, serve the two ads on a 50/50 split, and, once the results reach statistical significance, see which ad performed best on CTR and other KPIs such as conversions, conversion rate, and so on.
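For reference, that old-school evaluation step can be sketched in a few lines of Python. The numbers below are made up, and the two-proportion z-test is just one common way to judge whether a CTR difference is significant:

```python
# A minimal sketch of the classic A/B evaluation step: given impressions and
# clicks for two ad variants (made-up numbers), run a two-proportion z-test
# on CTR to check for statistical significance.
from math import sqrt
from statistics import NormalDist

# Hypothetical results from a 50/50 split serve
impressions_a, clicks_a = 10_000, 230   # Ad A: 2.30% CTR
impressions_b, clicks_b = 10_000, 195   # Ad B: 1.95% CTR

ctr_a = clicks_a / impressions_a
ctr_b = clicks_b / impressions_b

# Pooled click rate and standard error of the CTR difference
pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
se = sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))

z = (ctr_a - ctr_b) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed p-value

print(f"CTR A: {ctr_a:.2%}, CTR B: {ctr_b:.2%}, z = {z:.2f}, p = {p_value:.3f}")
# If p < 0.05, the CTR difference is unlikely to be chance and the winner can be kept.
```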
Now, with the automation Google, Facebook, and other platforms continue to roll out, A/B testing has become a thing of the past. We can no longer run a simple split test. The new AdWords recommendation is to run 5-7 ads per ad group and let the machine tell us which performs best, based on which one it serves more often according to its performance indicators.
First, am I saddened to see simple A/B testing become a thing of the past? Of course! But I am also excited for this next step in account management, strategy, and tactical optimization. In the end, it means I can rely less on my gut and feelings and more on the specific callouts the machine is making, and take those to the client.
So today, let's talk through a scenario and a new approach to testing ads now that ad optimization is being done for us.
The scenario
With Google now optimizing which ad copy serves via its algorithm, using KPIs, quality and relevancy, conversion performance, and other factors, split testing has become more difficult as AI and machine learning take over. You can still run a formal ad experiment with variations and tests, but if you want to start letting Google help determine ad performance, look at how Google is already using your ads and adjust based on that.
In this scenario, we are looking at an ad group with four ads: Ad #1 is serving 0.9% of the time; Ad #2, 87.7%; Ad #3, 7.7%; Ad #4, 7.7%. With this information, we can start to break down why Google has chosen to show Ad #2 most often, followed by Ad #4, and then make decisions about Ad #1 and Ad #3 for ad testing.
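To make that triage concrete, here is a rough sketch of the decision step. The serve-share numbers are the ones from the scenario; the 5% "barely serving" threshold is my own illustrative assumption, not a Google Ads rule:

```python
# A rough sketch of the triage in this scenario: given the share of impressions
# Google is giving each ad, keep the clear winner and flag starved ads as
# candidates for new ad copy. The shares come from the scenario above; the
# threshold is an assumed cut-off, not a Google Ads setting.
serve_share = {
    "Ad #1": 0.009,
    "Ad #2": 0.877,
    "Ad #3": 0.077,
    "Ad #4": 0.077,
}

LOW_SERVE_THRESHOLD = 0.05  # assumed cut-off for "Google has effectively benched this ad"

winner = max(serve_share, key=serve_share.get)
to_rewrite = [ad for ad, share in serve_share.items() if share < LOW_SERVE_THRESHOLD]

print(f"Keep and learn from: {winner}")
print(f"Pause or rewrite:    {', '.join(to_rewrite) or 'none'}")
```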
Tags: ads, adwords, google, facebook, kpi, ctr