Your Global Test Could Be Limiting Your Personalization Growth
The fundamentals of running a successful global test and how brands can steer clear of misinterpreting their results.
Here’s what you need to know:
- A global test compares users who see personalized campaigns against those who don’t to measure their effectiveness.
- Global test methodologies calculate the incremental value of a personalization program by comparing primary metrics.
- While global test tools can aid in communicating program value and gaining executive buy-in, the results can be noisy and misleading without proper methodology and distribution across the conversion funnel.
- Partner with a personalization provider to design and analyze your own global test for accurate, comprehensive optimization.
This content originally appeared in our XP² newsletter. Subscribe here to receive experience optimization insights like these, straight to your inbox.
To measure the impact and communicate the value of a personalization program, the global test has emerged as the gold standard for personalization and experience optimization teams—and with good reason. With many personalization and testing providers offering tools to help track results and act on them for optimization, teams have come to rely on running a global test. However, teams that don’t fully understand how to correctly interpret and use a global test risk flying blind, stymieing their own personalization growth in the process.
In a world with shrinking budgets and growing acquisition costs, what is the best way to unlock resources and grow personalization programs with a global test? I asked Hannah Solmor, Customer Success Manager at Dynamic Yield by Mastercard, to dive into the many nuances of running a global test and how to avoid common pitfalls.
JR: To level set: What’s a global test anyway?
Hannah: A global test is a holdout group of users who are ineligible to see any personalized campaigns. Measuring the effectiveness of an experience, like any scientific experiment, requires a control group. Like an A/B test, a global test enables a direct comparison between two groups—in this case, users getting a personalized website experience, such as banners and targeted product recommendations, and those seeing the static website. Additionally, it allows personalization teams to keep a pulse on the health of their total program and identify when they need to pivot, optimize, or double down on a certain strategy.
How can you use a global test to communicate the value of a personalization program?
Most methodologies use a simple formula for calculating the incremental value a personalization program generates. It compares the primary metric per user (MPU) in the personalized group (P) against the control group (C):
(MPU_P − MPU_C) × Users_P
While revenue is the most common primary metric for companies, one can apply this approach to any metric and find its incremental value: purchases, add-to-carts from product recommendations, and even page views. These metrics translate to value for the organization—be it through an uplift in conversion rate or a boost in revenue. And once shared across teams or reported on as part of a brand’s quarterly business review, they can also encourage more executive buy-in.
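The formula above can be sketched as a small calculation. This is an illustrative example only, with hypothetical revenue and user counts; the function name and numbers are not from the article.

```python
def incremental_value(metric_personalized, users_personalized,
                      metric_control, users_control):
    """Incremental value = (MPU_P - MPU_C) * Users_P."""
    mpu_p = metric_personalized / users_personalized  # metric per user, personalized group
    mpu_c = metric_control / users_control            # metric per user, control (holdout) group
    return (mpu_p - mpu_c) * users_personalized

# Hypothetical example: 90,000 personalized users vs. a 10,000-user holdout,
# using revenue as the primary metric.
value = incremental_value(
    metric_personalized=4_950_000,  # total revenue from the personalized group
    users_personalized=90_000,
    metric_control=500_000,         # total revenue from the holdout group
    users_control=10_000,
)
print(value)  # incremental revenue attributed to the personalization program
```

In this made-up case, revenue per user is $55 for the personalized group versus $50 for the holdout, so the program is credited with $5 × 90,000 = $450,000 in incremental revenue.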
What are the limits of using a global test tool?
Global test tools are susceptible to noise, and results can be misinterpreted or misleading if your team doesn’t have supporting methodology and technology around them. If experiments are highly localized or narrowly targeted, their impact may be hidden by a low signal-to-noise ratio. For example, if campaigns aren’t adequately spread across the conversion funnel (i.e., high-traffic, low-impact pages like the homepage as well as low-traffic, high-impact pages like the cart), a given user may be unlikely to see even one personalized campaign during their session.
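The dilution effect described above can be illustrated with a simplified model (the numbers and function below are hypothetical, not from the article): when only a fraction of users in the personalized group actually encounter a campaign, the uplift among exposed users is watered down in the global, all-users comparison.

```python
def diluted_uplift(true_uplift, exposure_rate):
    """Simplified model: the uplift a global test observes when only
    `exposure_rate` of the personalized group actually sees a campaign
    (the rest behave like the control group)."""
    return true_uplift * exposure_rate

# A campaign that lifts conversion by 10% among exposed users, but reaches
# only 15% of sessions, appears as roughly a 1.5% lift in the global
# comparison -- small enough to be lost in day-to-day noise.
print(diluted_uplift(0.10, 0.15))
```

This is why spreading campaigns across the funnel matters: raising the exposure rate raises the share of the true effect that the global test can actually detect.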
The best way to triage this is to remember that a global test is a high-level tool, not a breakdown of results at the individual campaign level. Imagine it as a soup and each individual campaign as an ingredient or spice. Looking at the soup might not tell you that there’s too much salt or the veggies are undercooked, but tasting each individually is how you ensure the sum is greater than its parts. The same is true of a global test—individual campaigns need as much scrutiny as your personalization program in its entirety.
What recommendations do you have for implementing a global test that captures some of these nuances?
First off, partner with your personalization and testing provider to set one up. They’ll help you identify your primary metric and goal, design the right process and implementation for your program, and consult when you’re rolling out nuanced campaigns.
For example, there might be a time when you want to ensure that optimization efforts are accurate and effective at both global and local levels. Your personalization provider can help you set up these tests and analyze performance at the campaign level, enhancing unique personalization experiences, while also creating an optimized, holistic site experience.
In all, while a global test is a necessary tool for personalization teams, you need to supplement it with methodology and a technical partnership to really make sure you’re continually identifying what drives results and optimizing your personalization program’s growth.
Your Global Test Is Crucial to Measuring the Impact of Your Personalization Program
It’s clear that a well-executed global test can deliver sharp, high-level insights into your personalization program’s effectiveness. That’s why it’s important to work alongside personalization experts as you implement one. For brands that lack the expertise to properly craft their own global test, Dynamic Yield has its own best-in-class global test tool, the Experience OS Impact report, that can help them quickly measure the overall impact of their personalization program. Remember: Relying solely on global test data without a nuanced approach and the proper support can lead to misinterpretations that limit your program’s growth.