📘 This article provides an overview of PageWorkers' split testing feature. PageWorkers is part of Botify's Activation Suite, available as an option with all Botify plans.
Overview
PageWorkers incorporates SEO split testing to help you evaluate how search engines respond to new ideas for SEO growth. Split tests enable you to robustly estimate the impact of your ideas on clicks, impressions, and click-through rate (CTR) in just a few clicks, without any external resources. You can easily prove the value of your ideas to other stakeholders with built-in statistical analysis and performance reporting. PageWorkers split testing can guide and accelerate your SEO strategy by ensuring you develop only the best ideas, avoiding wasted resources on ideas that have no positive impact on ROI.
Understanding Split Testing
The goal of SEO split testing is to increase organic traffic. A split test evaluates two page template versions to determine which version works best. Using two randomly selected groups of pages, one group is optimized (variant group), and the other is not changed (control group). The test runs long enough for Google to discover the optimized pages, and then Botify analyzes the results to determine whether one group performed significantly better than the other. While we cannot guarantee success, since there are always outside variables, a split test with statistically significant results typically supplies data showing you are moving in the right direction, and is therefore strong evidence that you can safely deploy your tested optimization to your full scope.
Difference Between Split Testing and A/B Testing
While SEO split testing is similar to A/B testing, the significant difference is that A/B tests evaluate the performance of two versions of the same page, while a split test evaluates two groups of pages to determine the SEO impact of modifying one group versus leaving the other unchanged. Split testing first evaluates how search engine bots respond, which in turn drives user activity; A/B tests evaluate performance based on user activity alone.
How it Works
When creating a Page Editor optimization, you can run a split test before deploying the optimization to the full scope. You determine what to test based on your SEO goals (e.g., the impact of a change to the title, description, or H1 of a group of pages) and define the percentage of pages to test in the scope of your optimization. When deployed, PageWorkers randomly determines which pages to optimize and which to preserve as the control group when delivering the optimization. After 30 days, PageWorkers runs a statistical analysis comparing the number of clicks gathered by the control group and the variant group to determine whether the difference is significant. Split testing reports provide traffic insights throughout the test period.
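The random assignment step described above can be sketched as follows. This is an illustrative sketch only: the function name, signature, and splitting logic are assumptions for the example, not Botify's actual implementation.

```python
import random

def assign_split_groups(pages, variant_pct, seed=None):
    """Randomly split pages into a variant (optimized) group and a
    control (unchanged) group, given a variant percentage.

    Hypothetical helper for illustration; PageWorkers' real assignment
    logic is not public.
    """
    rng = random.Random(seed)  # seed makes the example reproducible
    shuffled = list(pages)
    rng.shuffle(shuffled)
    cutoff = round(len(shuffled) * variant_pct / 100)
    variant = shuffled[:cutoff]   # pages receiving the optimization
    control = shuffled[cutoff:]   # pages left unchanged
    return variant, control
```

For example, splitting 300 template pages with a 50% test percentage yields two disjoint groups of 150 pages each.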
Requirements
You must have enough pages of the same template (generally, at least 300) with sufficient traffic (at least ten clicks daily in the previous period).
Split testing must be on a Page Editor type optimization applied to your entire site or URLs matching specific rules. Importing a specific list of URLs for a split test is not currently supported.
You can conduct only one split test per page template at a time.
How Results are Calculated
Before the test, PageWorkers evaluates the page groups and triggers an alert if it detects a disparity between the control and variant groups, since such a disparity would likely produce unreliable results. During the test, PageWorkers collects the click count of every page in each group over the test period.
When a test reaches the analysis phase (after 30 days by default, but it may be extended to 60 days), PageWorkers performs the following analysis, excluding the pages receiving the top 5% of clicks to prevent the most popular pages from skewing the results:
PageWorkers computes the daily average number of clicks per page for both the control and variant groups, then compares these daily averages to determine whether there is a significant difference between the number of clicks in the control group versus the variant group each day.
PageWorkers conducts a Student's t-test to determine whether any observed differences are meaningful or due to chance. If the test shows a significant result, there is a meaningful distinction between the two groups, and the difference is unlikely to be due to chance. If the result is not significant, the difference may be attributed to natural fluctuation.
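The analysis steps above can be sketched in Python using only the standard library. This is an illustrative approximation: the function names, the top-5% trimming, and the pooled-variance t-statistic are assumptions for the example (the actual PageWorkers computation, including how it derives a p-value from the t-statistic, is not public).

```python
import math
from statistics import mean, stdev

def trim_top_pages(clicks_by_page, pct=5):
    """Drop the top `pct` percent of pages by click count so the most
    popular pages do not skew the comparison (illustrative sketch)."""
    ranked = sorted(clicks_by_page.values())
    cut = int(len(ranked) * (1 - pct / 100))
    return ranked[:cut]

def students_t_statistic(a, b):
    """Two-sample Student's t-statistic with pooled variance, applied to
    the daily average clicks per page of each group."""
    na, nb = len(a), len(b)
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))
```

A positive t-statistic means the first group's daily average exceeds the second's; whether the gap is significant depends on comparing the statistic against the t-distribution for the relevant degrees of freedom.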
Interpreting the Results
Use the split test results to guide your decision to deploy the optimization to your full scope. The following are the possible test results:
Positive: The variant group's performance was better than the control group's, so you can confidently deploy the optimization to your full scope.
Negative: The control group's performance was better than the variant group's, so you should consider testing another version of the optimization.
Running: The test needs more time before the analysis is available (30 days by default, but it may be extended another 30 days if needed).
Inconclusive: No significant discrepancy exists between the control and variant groups. You may need to investigate further before determining whether deploying the optimization to your full scope will provide the expected ROI.
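As a rough sketch, the mapping from a completed test's outcome to these result labels might look like the following. The function and its inputs are hypothetical, introduced only to make the decision logic concrete.

```python
def classify_result(variant_daily_avg, control_daily_avg, significant):
    """Map a finished test's outcome to a PageWorkers result label.

    Hypothetical helper: `significant` stands in for the t-test's
    verdict on whether the observed difference is beyond chance.
    """
    if not significant:
        # No meaningful distinction between the groups was detected.
        return "Inconclusive"
    # A significant difference: its direction decides the label.
    return "Positive" if variant_daily_avg > control_daily_avg else "Negative"
```

For example, a significant result where the variant group averages more daily clicks than the control group is reported as Positive.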
All tests are valuable and provide actionable data, including those with negative and inconclusive results. A negative result can prevent you from wasting resources on optimizations with no ROI, or on those that could be damaging. Consider a test that evaluates removing a text string from your page description: an inconclusive result shows no significant difference in metrics, which provides assurance that removing the text is unlikely to affect your business goals.
Best Practices
Refer to the following guidelines when planning your split tests to realize the full benefit of this valuable feature:
Ensure there is a business case behind your hypothesis.
Confirm you are testing at the page template level.
Do not evaluate test results by traffic alone since variables such as seasonality can easily skew the results.
Test only one hypothesis for each optimization since PageWorkers will report the overall impact of the modification; it cannot provide the individual impact of each modification if two or more are combined.
To mitigate external causes of differences between variant and control groups, check to see if another active split test exists on the same pages you are testing before you start your test.
See also: