Author: Nick Rolfe
Reading Time: 5 minutes
Working with companies small and large to help establish or manage A/B testing and optimization programs has shone a light on a few common pitfalls for organizations new to testing. We understand why it happens: your site owner finally gets the go-ahead to create an experimentation practice within your company. The path of least disruption will always be to hire or appoint someone to build an independent team to run it (especially within large companies). With your existing site managers and production team already backlogged and a need to show progress, the testing show goes on in isolation.
Is this really the best way to drive results from your investment? In the immediate term, maybe. But as your program progresses, this question will come up more and more frequently, so why not start on the right foot and avoid forcing a retrofit when (not if) it's required? The reality of making these changes up front is that your business will get smarter and more efficient, leading to a shift in how your team spends their time.
Some testing service providers may talk about how and why their "process" for test ideation through test analysis is superior to all others, but the truth is: process is the easy part, and it will only carry you so far. The real value comes through internal organizational evolution and requires a commitment to change. Focus your time and effort on the following two areas to get ahead of the game:
Build a Testing Culture. The single most difficult task in establishing a testing program is organizational buy-in and participation. Everyone who is involved in site content should care about testing and want to get involved. We find this is most effectively reinforced from the top down. Encourage your leadership team to ask questions like "has this been tested?" and "what tests are planned to address this issue?" Testing should have a strong position in any monthly or quarterly business review, both to roadmap optimization and learning opportunities and to share statistically verified customer insights and performance results from previous tests.
Equally important in driving buy-in is championing your testing efforts through broad communication: newsletters, quarterly or semi-annual reviews, and concise documentation of the customer insights and business outcomes generated from testing.
Without buy-in, your isolated testing program will eventually run out of high-impact test ideas -- or worse, will struggle to maintain the funding and budget needed to keep running them.
Action your Analytics. Creating a testing practice should be viewed as a performance upgrade to your existing web/marketing analytics program, so the two should be integrated as closely as possible. Insights and takeaways from your analytics program are, in fact, hypotheses. Some carry more certainty than others and may not warrant testing, but experimentation lets you replace speculation with evidence. Feed the hypotheses from your investigative analysis into an iterative testing plan so you validate them and implement solutions at the same time. This connection between analytics delivery and test ideation will ensure a healthy pipeline of tests focused on priority business outcomes and create an engine for learning and optimization.
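To make "statistically verified" concrete: the workhorse behind most A/B conversion-rate comparisons is a two-proportion z-test. The sketch below is a minimal, standard-library-only illustration (the function name and the traffic/conversion numbers are hypothetical, chosen for the example), not a prescription for any particular testing tool:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: control converts 200/5000, variant 260/5000.
z, p = two_proportion_z_test(200, 5000, 260, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If the p-value falls below your chosen threshold (commonly 0.05), the lift is unlikely to be chance, and the analytics hypothesis that prompted the test graduates from speculation to a verified insight you can share in those business reviews.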
In summary... Building a sustainable testing program isn't about checking a box. It's about long-term success, and it won't survive on financial investment alone; it requires a commitment to organization-wide change. If your team is unwilling to take that leap, building your testing program in a silo will lead others to question its existence and only make it harder to get the approval you need when you decide to do it right the second time.