Turn the “Yes, But” of Testing into “Yes, And” for Online Testing

Testing. The mere mention of it sends many in our industry running for the corners, shuddering at the thought. In contrast to the deep culture and discipline of offline testing, bad experiences with complex, failed approaches, inconclusive data, unactionable findings, and a variety of other “traumas” have left our industry with a haphazard attitude toward the value and importance of testing in online campaigns.

But testing is what allows us to learn, evaluate, grow, and iterate our programs and our relationships with our supporters. It not only proves what works better; it is also, frequently, a great tool for disproving long-held internal assumptions (e.g., “our donors don’t like XYZ” or “our donors don’t respond to ABC”). Testing gives organizations data, rather than folklore, on which to base decisions about campaign direction and how to reach their goals.

So here are some ways to overcome the common obstacles (“yes, buts”) we hear when working with our customers.

“Yes, but….we don’t have a big enough file to do valid online testing.” This can be a real problem: you can’t get a valid data set, or you would have to run a test for two years to collect one. So how do we solve it? Consider changing your measurement metric. Maybe it would take two years to reach a valid conversion rate, but then your metric should really be open rate, click rate, or some other earlier step in the funnel with more response volume, one that can still give a good indication of which design, offer, ask, or registration form supporters prefer. If you are only getting 15 donations on average from your email asks, you will never draw conclusions about what donors do or don’t like about the form; but if you are getting 100 people to see the form, you are well on your way to being able to test it. The key to changing this “yes, but” is adjusting your target measurement benchmark.
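
To put rough numbers on “big enough,” here is a minimal sketch in Python of the standard two-proportion sample-size estimate. The baseline rates and the 25% relative lift below are purely illustrative assumptions, not benchmarks:

```python
from scipy.stats import norm

def sample_size_per_arm(p1: float, p2: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate audience needed per variant to detect a lift from p1 to p2."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Illustrative numbers: detecting a 25% relative lift is far cheaper
# on a 20% click rate than on a 1% donation conversion rate.
print(sample_size_per_arm(0.20, 0.25))    # click rate: roughly 1,100 per variant
print(sample_size_per_arm(0.01, 0.0125))  # conversion: roughly 28,000 per variant
```

The two print statements make the point of this “yes, but”: the same relative lift takes a fraction of the audience to detect at an earlier, higher-volume step of the funnel.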

“Yes, but…testing doesn’t work for us.” What does that mean, exactly? Have past tests “not worked” because the test did not beat the control? If so, that’s OK: that is the point of testing. If we knew the answer ahead of time, we wouldn’t need to test; we’d already know what to do.

Or is it that your test literally didn’t work, because something went wrong with the technology or the approach? That can be a real factor in testing hesitance, and a valid one. Try modifying the approach: simplify the test or narrow the reach (e.g., instead of segmenting your file 15 different ways to understand the impact on every possible donor permutation, just try it on one small group first), or even change the testing tools you are using. Speaking of tools…

“Yes, but…it’s too hard/complicated/time-intensive to set up, execute, and track an online test with our platform/CRM data-tracking needs/source-code-doesn’t-fit-online-appeals structure.” This one can be a doozy. If it takes weeks and weeks to set up a simple test, then of course you are going to avoid it. I would too. And if it’s impossible to get the testing audience out of your CRM and into your online platform in a timely manner, then no, we can’t pretend there is a simple “solution.” But there may be workarounds, or ways to test that are “close enough” to the perfect testing setup. Technology and platforms have come a long way. Google Analytics and Google Universal Analytics (which everyone on Google Analytics will be migrating to eventually) have made setting up tests increasingly easy, while allowing for more robust tracking. Maybe you can’t test to your email audience, but you can run a test on your web traffic.
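
As one illustration of a “close enough” workaround: a web test needs little more than a stable way to put each visitor into a variant. Here is a minimal, tool-agnostic sketch in Python; the function name and visitor ID are hypothetical, not any platform’s API:

```python
import hashlib

def assign_variant(visitor_id: str, variants=("A", "B")) -> str:
    """Deterministically bucket a visitor into a test variant."""
    # Hashing the ID means the same visitor always sees the same
    # variant across sessions, with no server-side state to store.
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Hypothetical usage: the ID could come from a first-party cookie.
print(assign_variant("visitor-12345"))  # always the same answer for this ID
```

Because the split is deterministic, you can recompute any visitor’s assignment later when tallying results, which keeps the tracking burden low even on a modest technology stack.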

“Yes, but…even putting together an A/B test for a registration or donation form is cumbersome.” Consider going outside your platform for tests. There are platforms out there, like Kimbia, that make donation-form testing simple, easy, and efficient. This may also have the positive side effect of spurring an internal discussion about whether the tools you are using are the right ones.

“Yes, but…I’m secretly really bad at math and don’t understand this whole data thing anyway, and I’m a marketer/fundraiser, not a statistician, dang it!” Yep. I hear you. I was there. I still spend 20 minutes calculating the tip on a bill and am terrified of *math*. But here is my question for those of you like me, who got a degree in the humanities to avoid math and now find yourselves staring it down: is being scared more important than being good at what you do and doing the best you can for your organization?

How do you know if your agency partner is showing you valid data if you don’t understand what makes a valid data set? How will you know to troubleshoot if you don’t recognize the key inflection points where things can go wrong? And how will you guide higher-ups, who look at summaries of data and may draw the wrong conclusions, if you don’t feel confident in the data yourself? So don’t run away from it. Embrace it. There are tons of tools online ready to help those of us who don’t do fractions and probabilities in our heads. Tools that tell you how big a test audience you need. Tools that tell you whether or not your results are significant. Tools that explain what significance is, why it’s important, and why it is just a coincidence that you received five versus seven responses to your test four times in a row. And so on.
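
In fact, a basic significance check can be just a few lines. Here is a minimal sketch using a standard two-proportion z-test from the statsmodels library; the response counts are made up for illustration:

```python
from statsmodels.stats.proportion import proportions_ztest

# Illustrative (made-up) results: 38 donations from 1,000 recipients
# of version A versus 62 donations from 1,000 recipients of version B.
successes = [38, 62]
trials = [1000, 1000]

z_stat, p_value = proportions_ztest(successes, trials)
print(f"p-value: {p_value:.3f}")
# A p-value below 0.05 is the conventional bar for calling the
# difference statistically significant rather than a coincidence;
# these illustrative numbers clear it (p is roughly 0.014).
```

If the p-value came back at, say, 0.30, the honest conclusion would be “we can’t tell yet,” not “version B won,” and that distinction is exactly what these tools protect you from getting wrong.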

So let’s start changing all those “yes, buts…” into “yes, ands.” Let’s start learning about our supporters and using that information to keep making an impact for our missions, communities, and supporters.

Miriam Kagan

About Miriam Kagan

As senior principal at Kimbia, Miriam Kagan works with clients to drive superior program and fundraising results and to embed best practices into all aspects of their programs. Her passion is helping clients use data-driven insight to inform decision-making. With over a decade in strategy roles at companies including Merkle, Convio, and Blackbaud, Miriam has worked with a broad variety of nonprofit clients across the health, social and human services, and animal welfare verticals. In her free time, she is obsessed with everything social media and mobile, and is always wondering “what’s next” for fundraisers. Before moving into nonprofit strategy, Miriam worked at leading companies including America Online and The National Geographic Society.

6 comments

  1. Testing all too frequently takes a backseat when pushing materials out the door. Reasons for this range widely, such as a lack of time before launch or even prioritizing staff intuition over metrics. Don’t miss out on maximizing performance and impact!

    Thanks for the great info, Miriam!

  2. Susan Kenna Wright

    Most of us work at nonprofits because we want to have an impact. Testing is a great way to be sure we are maximizing our impact.

  3. It’s all about baby steps! I agree with Bryan that figuring out some small thing to test is preferable to no testing at all. Great tips and encouragement!

  4. Brooke Belott

    I love this idea of “data, rather than folklore”…there are a lot of ways that an organization’s history can cloud possible steps to improvement, and the tools described here are great places to start!

  5. I was always annoyed by “stats” talk until I really saw how it could improve my pages and experience. It took a little while to get comfortable with, but it was worth it. I found that even getting started with a very simple test, and really understanding the results, is much better than not doing anything. Thanks!

  6. Google Analytics is often a simple yet overlooked solution for testing websites. It is always the first thing I install when creating a website; now I just need to get in the habit of using it to track where my visitors are going and what is and is not working well for my website.

    Thanks for the great tips, Miriam!
