We use online surveys on our website to learn about our users and their goals, identify areas for improvement, and measure user satisfaction. Surveys also let us test content, information architecture, and visual and interaction design. However, surveys are not always the best tool. Sometimes the feedback we need can be obtained through other approaches, such as interviews (formal or informal), census figures, or observation. We prefer a survey when we need a quick and efficient way to reach a large number of people, when we want statistically valid data about them, and when the information isn't available through other sources.
There are 3 types of surveys:
1. Case study surveys
provide only specific information about the group
collect information only from a part of a group
2. Sampled surveys
ask a sample portion of a group to answer the questions
the results for the sample reflect the results of the entire group
3. Census surveys
go to every member of the group
give the most accurate information about the group
are not practical for large groups
We don’t conduct census surveys at Change.org. The most common and practical approach for us is running a sample survey. Sometimes we also conduct a case study survey, but it depends on the research goal and audience.
The goal and audience
Defining the audience and the goal is one of the most important stages: it determines the survey's structure and form. First we consider the purpose of the research, and then the target audience.
Focus on the survey goal first: what are the major decision points or areas of uncertainty in how the product is used? Focus on those areas and clearly define what you hope to discover. Involve the stakeholders in developing the survey and set a deadline.
The next step is clarifying who has the answers to your questions. Which specific group? The audience defines your survey type. The most complicated stage in conducting a survey is compiling your sampling frame.
The sample is the group of people selected to take part in the research; it is drawn from the sampling frame, the list of everyone who could be selected. Choosing the sample size is hard. It should be large enough to represent the overall group, and proportional to it. (Statistics agencies sometimes use sample size calculators to figure out how big the sample should be.) A general rule is that the larger the sample size, the more accurately it will reflect the whole.
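As a rough illustration of what such a calculator does, here is a sketch using the standard formula for estimating a proportion at 95% confidence, with a finite population correction. The function name and defaults are illustrative, not part of any tool we actually use:

```python
import math

def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Estimate how many respondents are needed to measure a proportion.

    z=1.96 corresponds to 95% confidence; p=0.5 is the most
    conservative guess about the true proportion (illustrative defaults).
    """
    # Required sample for a very large ("infinite") population
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Finite population correction: small groups need a larger share
    adjusted = n / (1 + (n - 1) / population)
    return math.ceil(adjusted)

print(sample_size(100_000))  # 383 respondents for a ±5% margin
print(sample_size(500))      # 218 — almost half of a small group
```

Note how the required sample grows slowly with population size: surveying 100,000 people needs barely more respondents than surveying 10,000 does.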
Caveat: biased and self-selected responders should be considered as a separate group. Otherwise, a survey can cause more harm than good.
And don’t forget about margin of error and standard deviation! When analyzing the results, we should consider the margin of error in a sample. When in doubt, go find yourself a statistics nerd to help talk you through these details.
The margin of error defines a confidence interval around the result. When we conduct a survey, we’re making an assumption about what the larger population thinks. If a survey has a margin of error of 5%, it means that if you ran that questionnaire 100 times, asking a different sample of people each time, the overall percentage of people who responded the same way would stay within 5% of the original result in the vast majority of runs.
Standard deviation is the amount of variation or dispersion from the average (how far values fall from the mean). A low SD indicates that the data points tend to be close to the mean; a high SD indicates that they are spread out over a wide range of values.
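Both quantities are easy to compute. A minimal sketch, using Python's standard library and made-up example numbers (the 60%/400 result and the rating values are hypothetical, not real survey data):

```python
import math
import statistics

def margin_of_error(p, n, z=1.96):
    """Margin of error at 95% confidence for a proportion p
    observed in a sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Suppose 60% of 400 respondents answered "yes"
moe = margin_of_error(0.60, 400)
print(f"±{moe:.1%}")  # ±4.8%

# Standard deviation of some 1-5 satisfaction ratings
ratings = [4, 5, 3, 4, 5, 2, 4, 4]
print(statistics.mean(ratings), statistics.stdev(ratings))
```

So the hypothetical "60% said yes" result really means "somewhere between roughly 55% and 65%, at 95% confidence" — which is why the margin matters before acting on small differences.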
To summarize: defining the audience and the right user segmentation for a survey can be complicated, because you have to keep a lot of different concepts in mind to make your sample correct and reliable. The main rules to remember: asking more people in one survey helps reduce the margin of error, and analyzing results across multiple surveys gives a more accurate view of what people really think.
Gathering questions and compiling the survey
Before gathering questions and compiling the survey, remember an important rule of human nature: the more questions you ask, the fewer respondents who start a survey will complete the full questionnaire. Research shows that surveys work best with fewer than 35 questions (source). In our experience, the best response rates came from surveys with fewer than 12-16 questions. However, the most important criterion is not the number of questions but the total time users spend filling out the survey, which should not exceed 2-3 minutes.
There are many guidelines on how to formulate questions and how to lay out surveys to make them “respondent friendly”. The answers from user surveys must be relevant to the issues that matter, so if you can get the data from other sources (internal or external), avoid asking overlapping questions in your surveys. Create a survey that asks the right questions to meet your research objective, and make the form as short, simple, and clear as possible. Follow the “4 Be” rule: Be Brief - Be Objective - Be Simple - Be Specific.
No matter the survey’s main purpose and issue areas, in every survey we want to cover the User Experience Honeycomb (by Peter Morville), which covers user experience (UX) basics. In order to provide a valuable and useful UX, the content on our website must be: useful, usable, desirable, findable, accessible, and credible. Therefore, at the bottom of each survey we ask a block of questions like How clear are the guides and instructions on our website? and How difficult is it to navigate?
We also try to follow a few generally accepted rules:
Use closed-ended questions (yes/no, multiple choice, rating scale, etc.) rather than open-ended ones (comments and essays), but keep a few open-ended questions. Closed-ended questions give you a better response rate; open-ended questions provide more valuable feedback. So you have to juggle a bit here.
Consider which questions to use and when it is appropriate to use them. The sequence of questions should create a certain flow through the survey. I prefer to ask interesting questions at the beginning of the survey to catch the respondent’s attention. According to our observations, once a user starts answering the survey, they will most likely make it to the end, so leading with interesting questions may help stimulate interest. The most answered (highest response rate) and most interesting questions on Change.org include: Do you use your real name?, Why did you start your petition?, Would you create another petition in the future?, Would you use an “I don’t support” button for some campaigns?
Develop the concept of trust between yourself and the respondents. Maintain a neutral tone in all your questions. Notify users how much time it will take to complete the form and how you’ll use the results. Stay away from assumptions, personal statements, and complicated question structures.
Avoid leading questions, loaded questions, jargon or specific internal terminology.
The common practice is to place demographic and/or other sensitive questions at the end of the survey; if they come at the beginning, participants may opt out early. However, we tested a few surveys with different placements of the demographic block, and both the block and the survey overall performed much better when it was placed at the beginning.
Group similar questions together in the same area of the survey. At Change.org we create separate pages for different blocks of questions: a demographic block, a close-your-campaign block, a start-your-petition block, and so on. We name them accordingly: “Tell us about yourself”, “Your petition on Change.org”, “Please share your experience with us”. When users see the title of a new page of questions, they can prepare themselves for a certain theme or topic.
Ask if a respondent is willing to answer more questions or provide additional feedback.
There are a lot of rules and techniques for compiling your survey in the most efficient way; I have summarized only a few important ones.
Finalizing and sending the survey
Customizing the survey design to match our brand and theme makes the survey look professional; the right colors, fonts, and background make it visually appealing and user-friendly. Think about how you’re going to distribute the survey:
as a weblink shared through email or social media
embedding the survey into the website
an invitation popup asking the visitor to take the survey when they visit a specific page on the website
a survey window popup containing the survey itself when someone visits a specific page on the website.
Each of the forms above has its own specifics, advantages, and disadvantages. For instance, the survey window popup is the most convenient form, but it works best only for a few short questions. The embedded survey form looks very nice, but it may have its own formatting or interactivity limitations.
It is fairly common to offer incentives for completion in order to drive up the participation rate. A lot of companies use the subject line to emphasize the benefit to customers: tempting subjects like “Get your holiday free” or “Win your chance..” improve the response rate compared with “Can you fill out our survey?”. We don’t offer incentives, and prefer to emphasize that the survey is easy to fill out (“it only takes about 3 minutes”). In the absence of incentives, implying that the cost or effort is very low can have a similar effect.
Before sending out the survey, we want to make sure its design and settings work as expected, so we test the survey before it goes live. A small group of test respondents (10% of the sample) can help verify that the survey is working properly. If something goes wrong at this stage, it’s easy to revise and edit the questions and/or the survey design. We can also start analyzing results and revising or correcting questions; if some questions are often skipped, we can rephrase or remove them.
The final full send goes out to everyone on our sampling list, excluding the initial test group. At this stage we can no longer revise the survey or change its format. Usually, over 70% of results come in within the first few days after the full send; we wait a week before the final and official survey analysis. That said, some results can still come in months later if the survey form hasn’t expired.
No matter what tool you choose for conducting the survey, you should be able to share and export results in different formats. First we share results with the stakeholders, and then disseminate them to the rest of the teams. Once we have gathered the data, we can use it to evaluate the UX of the website, recommend improvements, discuss the recommendations, and so on.
In general, we can conduct an online survey at any stage of the development process. We use two main approaches, depending on whether the survey’s purpose is long-term or short-term. Once a year we launch a broad, global survey to cover mainly product questions and ask about general experience. In addition, a few times per quarter we may survey a very specific set of users with only a few questions.
One of the most important survey outcomes is establishing a relationship with our users. The next time we conduct a survey, the results should be better, because the best response rates come from an audience that knows you.
User Advocacy Data Specialist