Measurement matters. Even the most basic survey tool lets you measure responses using a ranking or rating scale. For instance, it’s common to use satisfaction or importance scales (e.g. “very important” to “very unimportant”) or recommendation scales (“How likely are you to recommend us to a friend?”).
Good to know: Sometimes only frequency matters, but data analysis often benefits from applying weighted scores to answers during the design phase.
A typical survey includes some kind of qualitative assessment. That is, you ask the respondent to grade a product, experience, or feature on a scale you provide: How important is this? How satisfied were you? How likely would you be to buy this? And that works great in the large majority of situations.
Generally, when we conduct a survey we’re looking for an overall perception: How happy are our customers, and what can we do to make them happier? It isn’t always necessary to fuss with the details. Keep it simple!
However, in some circumstances, you can make better decisions by weighing the answers.
Perhaps the best explanation is an example used in The Increasing Problem With the Misinformed, which (among other things), uses PolitiFact data that measures the truthfulness of politicians’ statements. The author, Thomas Baekdal, initially graphs politicians’ accuracy using an ordinary rating scale. But then he points out: “…By ranking the data like this, we don’t take into account the severity of the lies a person makes. A person who made 10 small lies will be ranked the same as a person who made 10 big lies. Both are obviously bad, but we should really punish people in relation to the severity of their lies.”
Instead, he uses a system based on a logarithmic scale:
- We give “Half-True” a value of 1 and center it on the graph.
- We give “Mostly True” a value of 2, and “True” a value of 5. The idea here is that we reward not just that something is true, but also that it provides us with the complete picture (or close to it).
- Similarly, we punish falsehoods: “Mostly False” is given a value of -2, and “False” a value of -5.
- Finally, we have intentional falsehoods, the “Pants on Fire,” which we punish by giving them a value of -10.
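The effect of that scale is easy to see in a few lines of code. This is a minimal sketch using the weights listed above; the statement counts for the two hypothetical politicians are made up for illustration:

```python
# Baekdal-style truthfulness weights, as described above
WEIGHTS = {
    "True": 5,
    "Mostly True": 2,
    "Half-True": 1,
    "Mostly False": -2,
    "False": -5,
    "Pants on Fire": -10,
}

def weighted_score(statement_counts):
    """Sum each rating's count multiplied by its weight."""
    return sum(WEIGHTS[rating] * count for rating, count in statement_counts.items())

# Two hypothetical politicians with the same number of untrue statements:
small_lies = {"Half-True": 5, "Mostly False": 10}
big_lies = {"Half-True": 5, "Pants on Fire": 10}

print(weighted_score(small_lies))  # -15: punished lightly
print(weighted_score(big_lies))    # -95: punished severely
```

On an ordinary frequency chart these two records would look identical; the weighting is what separates them.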
My point is not to highlight anything about politicians or the media, which is what most interests Baekdal, but rather to show that sometimes one answer is more important than another, and your data analysis should reflect that.
A few more typical business examples:
- You may assess a scholarship candidate on several criteria. While all are important to decision-making, some matter more than others: essay quality, financial need, diversity factors, geographical considerations.
- Choosing which employee gets a quarterly achievement award should factor in attendance record, but that’s (arguably) less important than skills learned, project tasks completed, or community service.
- Perhaps you’re asking customers to provide feedback on new features you’re considering adding to your product. If you ask them to say whether a feature is mandatory, desirable, or optional, you may want to give greater urgency to the mandatory items.
Perhaps the best-known example of weighting responses is the Net Promoter Score (NPS), which uses a 0–10 scale to measure the key customer service question: “How likely are you to recommend [brand] to a friend or colleague?” While it’s nice to look at the numbers as a regular chart, customer insight comes from weighting the results: the Net Promoter Score subtracts the percentage of Detractors (scores 0–6, unhappy customers) from the percentage of Promoters (scores 9–10, loyal enthusiasts). In addition to giving you a different view of the data – one not thrown off by “Lots of people think we are average” – the NPS is accepted across the industry, which means a company can use it as a customer service benchmark.
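The NPS formula is simple enough to sketch directly. The list of responses below is invented for illustration:

```python
def net_promoter_score(ratings):
    """NPS = % Promoters (9-10) minus % Detractors (0-6), rounded to a whole number."""
    total = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / total)

# Hypothetical batch of survey responses on the 0-10 scale:
responses = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]
print(net_promoter_score(responses))  # 4 promoters, 3 detractors -> NPS of 10
```

Note that the Passives (scores 7–8) count toward the total but neither add to nor subtract from the score, which is what keeps “average” responses from skewing the result.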
Those weighted scores can also help you track longer-term trends and gauge progress. For example, if your HR department runs a yearly employee satisfaction survey, the overall weighted score might be, say, 81 on a 1–100 scale. If next year’s overall score is 69 or 89, that tells you whether employees are more or less engaged.
Sold? Here’s how it works.
Some survey tools let you export the raw data to Excel, where you can massage it yourself, assigning a higher weight to “must have” features than to “nice to have” features. However, the whole point of using software is to make our lives easier – so this capability is built into SoGoSurvey (for SoGoSurvey Pro and above). Let me show you how easy it is to use.
If you have used SoGoSurvey at all, you probably are familiar with the standard question types, including radio buttons. This is a typical question, along with its answers, in which each answer has an equal weight:
However, imagine a scenario in which the survey is a job application. We’re the HR people looking for someone to be a long-haul truck driver, so we value candidates who are willing to travel. An employee who’s happy to spend all their time on the road is most attractive to the company, while someone unwilling to travel probably won’t be happy in the role.
But this isn’t the only factor; there are others to consider. Let’s make it easier for the hiring manager to pick the best-qualified people from the stack of applications.
So instead, let’s choose a rating radio button as the question type. SoGoSurvey permits you to type in a number for the weight to give each answer:
You choose the numbers that help you judge the response most accurately. Here we give a positive number for the “Yes, we want this person!” answers, and negative scores for the responses that are hiring warning signs. But we could as easily have weighted the candidates on a 1–10 scale, or given 5 to the acceptable answers and 0 to the rest. It’s wholly up to you to assign the weights and their significance. Just keep in mind: the weight should reflect the urgency of each answer.
You can use any number of rating questions in a survey. If there are five such questions in this job application (another might be, say, the number of years of experience as a truck driver), each with its own weight, the ideal candidate might score 60 points.
None of the questions completely qualify or disqualify someone, but the overall score would. You may decide that anyone whose weighted answers add up to 40 or more is a good fit, or at least worth a job interview.
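Scoring a stack of applications this way is straightforward. This sketch uses hypothetical answer options and weights (only the 40-point interview threshold comes from the text above; everything else is invented for illustration):

```python
# Hypothetical answer weights for two of the five rating questions;
# the remaining three would follow the same pattern.
question_weights = [
    {"Yes, full-time": 12, "Most weeks": 8, "Rarely": 2, "No": -5},  # travel willingness
    {"10+ years": 12, "5-9 years": 8, "1-4 years": 4, "None": 0},    # driving experience
]

GOOD_FIT_THRESHOLD = 40  # weighted total that merits an interview

def applicant_score(answers):
    """Add up the weight of each chosen answer, question by question."""
    return sum(weights[answer] for weights, answer in zip(question_weights, answers))

score = applicant_score(["Yes, full-time", "5-9 years"])
print(score)  # 12 + 8 = 20
```

No single answer decides the outcome; only the sum across all weighted questions does, which is exactly the point.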
To the survey respondent, the weighting is invisible. They see only the question.
You, however, see the difference when you look at the reports. Weighted scores and averages are provided in several reports:
- Bar graph
- Frequency report
- Advanced frequency report
The bar graph report shows the weight for each answer and the overall score for the question. Compare the results here, with the unweighted results at the top and the weighted results below. Visually the two bar charts look the same, but the unweighted version makes it hard to discern overall sentiment. With the weighted results, you can see that the overall response is 5.83. That gives you something by which to judge the typical job applicant (at least in regard to travel willingness).
Also note: people who don’t respond to the question are counted as NULL; their answers are not incorporated in the statistics.
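That overall number is just the average of the weights of the answers actually given, with non-responses left out. A minimal sketch, using made-up answer options and weights:

```python
def weighted_average(responses, weights):
    """Average the weights of answered questions; None (no response) is excluded."""
    answered = [weights[r] for r in responses if r is not None]
    return sum(answered) / len(answered) if answered else None

# Hypothetical weights; None marks a respondent who skipped the question.
weights = {"Yes, full-time": 10, "Most weeks": 5, "No": -5}
responses = ["Yes, full-time", None, "Most weeks", "No", None]
print(weighted_average(responses, weights))  # (10 + 5 - 5) / 3, about 3.33
```

Dividing by the count of actual answers (three here, not five) is what keeps the skipped questions from dragging the average toward zero.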
Another way to see the difference in responses is in the simple frequency report. In the results from the regular radio button, you see a basic table of results. But with the weighted answers, you see the average as well as the weights that were applied to reach that number.
You can dive a little deeper with the Advanced Frequency report, using the additional controls available.