The days of time-consuming, high-cost quantitative research are long over. Today, we can run tests at high volumes using automated platforms, collecting the data brands need to make decisions at the fast pace they require.
While this allows consumer data to get into the hands of businesses much faster and more cost-effectively, it also means a lot has changed in the world of traditional survey research.
In this article, I’ll walk through the three main changes I’ve seen in quantitative research and what you can do with your data since the world of automation has taken charge.
1. Results are in! (but without a middleman)
Leaning heavily on automated testing removes the human layer in between you and your results — so you get data faster and more directly without someone in the middle.
This is an obvious benefit of automation, but it also means you have to do your homework to understand a provider's approach to data quality. In this automated world, you are always the first person to see your results; your vendor no longer has someone validating everything before passing it to you. That makes data quality practices even more important.
But it’s also worth noting that in a world where you can run research faster and more cost-effectively, you will likely end up running even more research than ever before. And that’s great. But without a person in the middle to see the results before you, it’s important to remember what 95% confidence really means.
At 95% confidence, roughly 1 test in 20 will show an anomalous result purely by chance; that is what the remaining 5% means. So when you’re doing high-volume testing in this more cost-effective environment, it becomes much more likely you will encounter anomalous results. That’s just statistics. And without someone in the middle to catch them, it’s up to you to pay attention.
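To make that concrete, here’s a quick back-of-the-envelope calculation (a simple sketch, assuming each test is independent) showing how fast the odds of at least one chance anomaly grow as you run more tests:

```python
# Probability of at least one chance "anomaly" (false positive)
# across n independent tests, each run at 95% confidence.
def p_at_least_one_anomaly(n_tests, confidence=0.95):
    return 1 - confidence ** n_tests

for n in (1, 10, 20, 50):
    print(f"{n:>2} tests: {p_at_least_one_anomaly(n):.0%} chance of at least one anomaly")
```

At one test the chance is just 5%, but by 20 tests it’s around 64% — you’re already more likely than not to see at least one anomalous result, which is exactly why keeping a human eye on high-volume results matters.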
But of course, when you’re running a lot more research, you’re more likely to spot when something doesn’t line up with what you’d expect based on other tests you’ve run. You should also invest in multiple rounds of testing to get a better overall picture of how consumers are truly responding to your stimuli, detect anomalies, and make sure you aren’t missing a major issue in your messaging.
Statistical anomalies are an inevitability with high-volume research. But the more research you do, the more adept you’ll become at spotting them.
2. The power of meta-analysis
With an automated research platform, you can not only run more research than ever before, you can also analyze it all in one place. That lets you conduct your own meta-analysis (or connected learnings), comparing data across multiple studies to find trends.
For example, as a large brand, you may want to see how your last few Super Bowl ads performed and whether there are patterns in what you’ve tagged (e.g. whether a celebrity appeared in the ad, or a specific call-to-action) to find a winning combination for the next one, saving you millions while also giving you a better understanding of your consumers.
And as an added bonus, by knowing the trends in your data, you’ll be well-equipped to spot any anomalies that occur in your quantitative research much more easily (see my point above).
Of course, tagging and reviewing trends in your data can be a time investment, but it will allow you to make better decisions based on many data points over time — quite a worthwhile investment in the long run, if you ask me.
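As a rough illustration of how tagged studies can feed this kind of connected-learnings analysis, here’s a minimal sketch in Python. The study names, tags, and scores are all hypothetical:

```python
from statistics import mean

# Hypothetical tagged ad-test results (all values made up for illustration).
studies = [
    {"ad": "SB 2021", "celebrity": True,  "cta": "visit site", "score": 62},
    {"ad": "SB 2022", "celebrity": False, "cta": "scan code",  "score": 48},
    {"ad": "SB 2023", "celebrity": True,  "cta": "scan code",  "score": 71},
    {"ad": "SB 2024", "celebrity": False, "cta": "visit site", "score": 45},
]

def average_score_by_tag(studies, tag):
    """Group studies by a tag's value and average their scores."""
    groups = {}
    for s in studies:
        groups.setdefault(s[tag], []).append(s["score"])
    return {value: mean(scores) for value, scores in groups.items()}

print(average_score_by_tag(studies, "celebrity"))
```

In this made-up data, the celebrity-tagged ads average noticeably higher, which is exactly the kind of pattern you’d then validate with further rounds of testing.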
3. The sky’s the limit when it comes to experimenting
If you’re passionate about the nuts and bolts of methodology, I encourage you to always experiment.
And with an automated testing platform, the sky’s the limit on what you can test. You can run tests on tests, compare word choices or the order of your survey questions, cross-check survey responses by region; the list goes on!
We run this kind of experiment all the time. In one case, we looked at whether the numeric scales used in surveys (5-point scale, 7-point scale, etc.) had an effect on responses. Interestingly enough, the scale itself did not seem to have any radical effect, but the way questions were framed did.
For example: if a question is phrased as an agreement statement, respondents can easily infer the answer you’re hoping for, which can heavily influence how they respond, leaving you with an answer that may not be genuine.
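If you want to experiment with question framing yourself, a simple split test is the place to start. Here’s a minimal sketch (the question variants and assignment scheme are hypothetical) that deterministically splits respondents between two phrasings of the same question:

```python
import random

# Two hypothetical phrasings of the same underlying question.
VARIANTS = {
    "agreement": "I agree that this ad is memorable. (Agree / Disagree)",
    "neutral": "How memorable is this ad? (Not at all / Somewhat / Very)",
}

def assign_variant(respondent_id, seed="framing-test-1"):
    """Deterministically assign a respondent to one wording variant."""
    rng = random.Random(f"{seed}:{respondent_id}")
    return rng.choice(sorted(VARIANTS))

# Sanity check: 1,000 respondents should split roughly 50/50.
counts = {name: 0 for name in VARIANTS}
for rid in range(1000):
    counts[assign_variant(rid)] += 1
print(counts)
```

Comparing top-box scores between the two groups then shows you how much the framing alone moves responses, before you commit to a wording in a live study.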
By experimenting, you’ll gain a greater understanding of the data you’re collecting and how to structure surveys in a way that leaves less room for error — ultimately giving you more confidence in the quality of your data.
Final thoughts
In the fast-paced world we live in, automation has opened a lot of doors when it comes to conducting survey research and the learnings you can get from your data.
If you’re interested, you can learn more about how we tackle data quality within our own platform.
And if you’re in search of a solution that provides fast, reliable data you can share with confidence, learn more about R4I.