While there are many different survey platforms, all offering a variety of ways to ask questions and implement survey logic, we should not forget to invest the time to test our surveys. These 8 tips cover the topics you should keep in mind during the survey QA process.
Optimize Team Communication
In most cases, multiple team members work on the same project simultaneously. There are survey authors, survey coders, project managers, clients, translators, and QA professionals, each with a unique role in survey production. Typically, these team members send emails to each other (or to only specific parts of the team) as their main mode of project communication. This can quickly become unmanageable, resulting in hundreds of separate emails, making it easy to lose focus and allowing critical issues to fall through the cracks. Centralizing team communication can eliminate this chaos and allow for a more collaborative approach, positively affecting all aspects of the survey testing life cycle.
Organize your tests with a test plan
Before beginning the testing phase, try to think of all of the different possible routes through your survey. For example, if your survey uses 3 different languages (e.g. German, French, and Italian) and is conducted in 5 markets/countries (e.g. Germany, Austria, Switzerland, France, and Italy), there are 15 different possible routes through your survey (one for each language/market combination), all of which need to be reviewed. Furthermore, additional routing-critical questions (e.g. asking about a job role) can expand your test plan dramatically. If your job role question has 10 possible answers, each potentially triggering different follow-up questions, you could end up with up to 15 × 10 = 150 different routes in the test plan. Once your test plan is worked out, make sure you are coordinating with your team in the most efficient way.
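To get a feel for how quickly these combinations multiply, here is a minimal sketch that enumerates all routes from the example above. The language, market, and job-role values are placeholders standing in for a real test plan:

```python
from itertools import product

# Illustrative values matching the example in the text; a real test plan
# would load these from the survey definition.
languages = ["German", "French", "Italian"]
markets = ["Germany", "Austria", "Switzerland", "France", "Italy"]
job_roles = [f"Role {i}" for i in range(1, 11)]  # 10 possible answers

# Every combination is one route that should appear in the test plan.
routes = list(product(languages, markets, job_roles))
print(len(routes))  # 3 languages x 5 markets x 10 roles = 150 routes
```

Listing routes this way also gives you a natural checklist: each tuple can be assigned to a tester and ticked off as it is covered.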
Test with multiple devices
Generally, you cannot control your respondents' choice of devices and browsers. As such, you should test your survey on at least 2-3 desktop browsers (Chrome, Internet Explorer, Firefox), as well as on Android and iPhone devices. Browsers like the 2020 version of Microsoft Edge and Opera are based on the Chromium engine, so detailed tests with them are less important. Keep in mind that browser tests are mainly about page rendering, not so much about checking the logic of a survey. It is enough to see how the individual questions look on the various devices and engines. These tests are still important, even if you use a survey system that is already mobile optimized.
There are two reasons for this:
First, customers and project managers tend to use very detailed descriptions for questions or introductions. While such description/introduction texts look perfect on a large desktop PC, they can look completely different on a mobile device. As a rule of thumb, question texts should have a maximum of 150 characters, and single/multi-response questions should have a maximum of 7-8 options when viewed on a mobile device. Also, the text for those answer options should be limited to 40-50 characters.
The second reason for doing mobile-device tests is related to the flexibility your coder may or may not have when creating surveys. It is recommended to optimize the screen layouts according to the demands of the client. This might include images and logos which need to be positioned in a specific way. Different browsers, however, tend to render positions differently, or they squeeze elements that do not fit the intended screen size. An experienced coder can usually work around these issues, but not all coders know every detail of HTML and CSS inside and out. There should be some control and review of the screen design during the QA phase.
Report any issues with all available details
Quite often, especially in large studies with multiple languages, a coder does not know which question a reported issue relates to. Even if a screenshot is sent, it is not always clear. A tester should therefore always report the exact question name (e.g. Q7), a description of the issue, and a screenshot (if possible). Extra information, like the answers to previous questions, will enable the coder to replicate the issue. Remember, surveys can be very complex and use complicated routings. These routings often depend on answers given to previous questions. When a coder fixes issues, it is very important to know exactly how a tester was routed through the survey when they encountered the issue.
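A simple way to enforce this is to give every issue report a fixed structure. The sketch below shows one possible shape; the field names are illustrative, not taken from any specific tool:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class IssueReport:
    """Hypothetical structure for a complete, reproducible issue report."""
    question_name: str                      # exact question, e.g. "Q7"
    description: str                        # what went wrong
    screenshot_path: Optional[str] = None   # screenshot, if available
    previous_answers: dict = field(default_factory=dict)  # routing context for replay

report = IssueReport(
    question_name="Q7",
    description="Answer list is missing the 'Other' option in the French version.",
    screenshot_path="q7_fr.png",
    previous_answers={"Q1": "French", "Q3": "Manager"},
)
print(report.question_name)
```

Because `previous_answers` is part of the report, the coder can walk the exact same route the tester took instead of guessing at the routing context.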
Include your customer in the test process
Sooner or later, most clients will want to test their survey to see if all is ok. They may also decide to change things once they’ve seen their survey on a screen. Try to ensure that the client reports issues in the same manner (e.g. specific question name, description of the issue, screenshot, specific routing details/previous answers) as your internal team. Often, communication flows through the project manager, who then communicates these test issues to the coders. Without a centralized reporting system, this can cause more confusion and delays. When client-related issues are identified, ensure that the necessary team members are notified quickly and efficiently.
Double check fixed issues
Once a coder has fixed a reported issue, there should always be a re-test to ensure that the issue was actually fixed. Ideally, the same answers should be used that were given when the issue was reported; only then is it a valid test of the reported issue. Although this requires a bit of extra time for the initial documentation and for repeating a survey test, in the end it is worth it. There is nothing more disappointing than realizing at the end of data collection that a routing path was incorrect and you are now missing mission-critical data.
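The re-test idea can be sketched as replaying the recorded answers against the survey and comparing the route taken with the expected one. Everything here is a toy stand-in: `run_survey` is a hypothetical mini-engine, not a real survey platform's API:

```python
def run_survey(answers):
    """Toy stand-in for a survey engine: routes based on a job-role answer."""
    path = ["Q1", "Q2"]
    if answers.get("Q2") == "Manager":
        path.append("Q3_manager")
    else:
        path.append("Q3_other")
    return path

def retest(recorded_answers, expected_path):
    """Re-run the survey with the exact answers from the original bug report."""
    return run_survey(recorded_answers) == expected_path

# The answers a tester recorded when the routing issue was reported:
recorded = {"Q1": "Germany", "Q2": "Manager"}
print(retest(recorded, ["Q1", "Q2", "Q3_manager"]))
```

Keeping the recorded answers alongside the issue report (as in the earlier tip) is what makes this kind of deterministic replay possible.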
Create a final QA Report
This report should contain the final walk-through of the test plan, which routes were tested, and how the individual questions are connected to each other. It is not necessary to list every issue found in this report, but it should show the final tests according to the initial test plan. Reports like these can go a long way towards solidifying relationships with your clients, possibly even creating future work opportunities.
Develop your team and classify the issues over time
Create a classification scheme for the types of survey issues you’ve encountered, for example: Layout, Wording/Spelling, Missing Rotation, Survey Logic, Customer Change Request, and so on. When issues are reported (either internally or externally from the client), these classifications should be used. Review your database of found issues every month to identify any developing trends (e.g. Layout issue reports trending down, Customer Change Requests trending up), or whether the same issues happen over and over again. This will also give you good insights into the training needs of your team.
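Even a minimal issue log makes this kind of trend review straightforward. The sketch below tallies issues per month and category; the log entries are invented examples standing in for a real issue database:

```python
from collections import Counter

# Hypothetical issue log; in practice this would come from your issue database.
issues = [
    {"month": "2024-01", "category": "Layout"},
    {"month": "2024-01", "category": "Survey Logic"},
    {"month": "2024-02", "category": "Customer Change Request"},
    {"month": "2024-02", "category": "Customer Change Request"},
]

# Count issues per (month, category) pair to spot trends over time.
trend = Counter((i["month"], i["category"]) for i in issues)
for (month, category), count in sorted(trend.items()):
    print(month, category, count)
```

Rising counts in one category (say, Survey Logic) are a direct pointer to where coder training or process changes would pay off most.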
If you follow these 8 tips, you will be able to optimize your survey operations dramatically.
2×4 SurveyTester supports most of the above-mentioned topics using AI-based methods along with a flexible collaboration platform. Using our own Survey Bot, SurveyTester performs fully automated initial tests while simultaneously learning how your survey is structured. It collects all kinds of information, which allows QA experts, coders, project managers, and clients to concentrate on their core tasks. SurveyTester saves time and money because it can repeat tests using the same answers a tester gave before. SurveyTester automatically creates screenshots of every survey page, using real equipment ranging from iPhones/iPads to Android devices, as well as a host of different PC-based browser combinations. SurveyTester also centralizes the error-reporting process, allowing anyone in the QA workflow to log and report issues (misspellings, routing errors, survey updates, etc.), eliminating the need for countless emails between all those involved in the QA process.