“How likely are you to recommend this blog?”
“What is your age range?”
“What brings you here today?”
Just like any normal conversation, a well-designed survey flows smoothly rather than jumping around awkwardly. Unless you’re sending out a one-question survey (Do you like me? Check yes or no.), you need to make some decisions about how and when to ask which questions and what answer options and question types to use.
In the first part of this series, we considered the benefits of this survey-as-conversation model, as well as how to start a good survey. Now, we’re ready to tackle the next part — the middle.
The merit in the middle
Whether you think of it as the body of your content or the cheese in your sandwich (vegetarians avoid meat even in metaphors), the middle of your survey carries a lot of responsibility. This is the stuff you came here to handle. These are the key questions you need to ask in order for the results of this project to be at all worthwhile. No pressure.
Because of the importance of these items, this is probably where you started your brainstormed list of questions in the first place. Brainstorming, collecting ideas from research, and checking in with members of both your target audiences (survey participants and report recipients) can help you develop a full list of ideas.
Not that this has ever happened to you, but being underprepared for a conversation can sap your confidence and zap your chances at success. When making important phone calls for various service projects or school assignments, I remember having to write down everything I needed to ask — because otherwise I’d hang up too soon. The same can be even worse in person — imagine running out of steam when you’re halfway through a conversation… and then just standing there awkwardly. At some point, best case scenario, you or your interlocutor will probably just shrug and walk away.
Where is all this wandering taking us? Take your time in planning. If you don’t have enough good ideas to start with, you’ll end up with a half-baked conversation and a half-useful data set.
Metrics and more
When you start to consider the must-have items for your survey, consider the conversational model. If you could only ask someone one question in a particular situation, what would it be? If your friend tells you they saw a movie, you ask how it was. If a family member has been unwell, you ask how they’re feeling. If a colleague attended a workshop, you ask if it was useful. This core question is your key metric — measuring what matters most.
In the world of customer experience, touchpoint surveys prioritize a few CX metrics — standards like Net Promoter Score, Customer Satisfaction, and Customer Effort Score. Using the same exact questions over and over again allows you to compare the same priority across multiple customer interactions. If you find that customers are really satisfied when they sign up but not so satisfied 90 days in, you’ve got work to do. CX surveys can be super short — sometimes just a single question or two — which means that getting “the middle” right is critical.
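If you’re curious how a metric like Net Promoter Score actually turns answers into a number: it’s the percentage of promoters (scores of 9–10 on the 0–10 recommend question) minus the percentage of detractors (scores of 0–6). Here’s a minimal sketch using made-up scores — the data is purely illustrative:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count toward the total but neither add nor subtract.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round((promoters - detractors) / len(scores) * 100, 1)

# Hypothetical responses from eight participants
print(nps([10, 9, 8, 6, 10, 7, 3, 9]))  # → 25.0
```

Because the formula never changes, asking the identical question at sign-up and again at 90 days gives you two directly comparable numbers.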
In the world of employee experience, a similar measure is employee pulse. Over time, you’re asking the same key question (Internally, we use the highly conversational question ‘How’s it going?’.) to the same group of people to get a quick take on any changes. Like CX surveys, employee pulse surveys may be super short. Be sure you’re satisfied with the question you’re asking because you’re going to be seeing it a lot.
Not all surveys are these super-short time-and-temperature checks. In an employee engagement survey, one of the longer conversations you can get away with, your key metric (engagement) is likely made up of a series of questions. Rather than simply asking ‘Are you engaged?’, this metric is based on answers to an entire dimension of related sub-questions that get at the complexity of the concept of engagement. Again, keep this series of questions the same each time you conduct this project to ensure you’re comparing apples to apples.
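A composite metric like engagement usually boils down to averaging a participant’s ratings across the sub-questions in that dimension. A minimal sketch, with hypothetical sub-questions and 1–5 ratings:

```python
from statistics import mean

# Hypothetical 1-5 ratings on the sub-questions making up the
# engagement dimension for one participant (items are illustrative)
engagement_items = {
    "I find my work meaningful": 4,
    "I see a future for myself here": 3,
    "I would recommend this workplace": 5,
}

# The engagement score is the mean of the sub-question ratings
engagement_score = round(mean(engagement_items.values()), 2)
print(engagement_score)  # → 4.0
```

Keeping the sub-questions identical from one survey wave to the next is what makes those composite scores comparable over time.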
And yes, because these questions are so critical, you can set these as mandatory. As discussed in the last round, you might need to include some demographic questions required to meet your slice-and-dice analytics requirements, but ideally you’ll be able to pre-fill and possibly even hide those questions so your participant can get right to it. (People you know really well don’t need a lot of preamble — since you’re likely in a constant thread of conversation with certain friends or family, in media res isn’t as strange a place to start as it would be with a complete stranger.) Require that key answer, then let your participants choose what else they want to share.
So you’ve got your key answer — your customer’s satisfaction or your employee’s pulse check rating, for example. What’s the next logical question to ask?
Once you find out how someone feels, in most cases you want to know why. A rating is a good indicator, but it’s not an explanation. Conversationally, asking someone ‘Why?’ might be opening a Pandora’s box that spills out more details than you can handle. At a certain tender age, many of us used ‘Why?’ as the perpetual conversation-continuer, looking for further explanations of the workings of the universe so that we could (a) get smarter, (b) spend more time with loved ones, (c) annoy the same loved ones, and/or (d) avoid ever having to go to bed.
An open-ended Text Box question is a simple way to ask participants to share their explanations, but remember that the Pandora’s box reference is intentional. Clarify the actual question itself to make it as precise as possible (‘How could we improve your experience next time?’ rather than ‘Anything else?’), then set the size and available character count to nudge participants to be as concise as they can.
Still, if you already have a pretty good idea of some of the variables that may be impacting your participants’ experience, a Key Driver question can help to highlight these points. Key Driver questions ask for quick ratings on a number of aspects (for example, Timeliness, Value, and Service), which can then be used in a Key Driver Analysis report to investigate potential correlations with the key metric. The important takeaway here is that a KDA report can help you to identify where to spend your resources to make the biggest improvement. If Timeliness turns out to be highly related to customer satisfaction, for example, focus there.
Again, longer projects use the same concept, with engagement surveys following the engagement dimension with a series of other dimensions about important elements of the employee experience (Communication, Recognition, Shared Values, etc.). An Engagement Report (surprise!) can help to identify the drivers having the biggest impact.
Note that follow-up rating questions don’t necessarily have to replace the Text Box. If you choose to have both, think about the order. In most cases, you’ll have the overall rating question first, followed by ratings on other variables, and then the Text Box. Here, the variable ratings help to filter out some of the feedback that would otherwise end up in the Text Box, making results easier to analyze and understand.
So, how many questions do I need in my survey?
This is never the right question. We could really just leave it at that.
If you’re playing Twenty Questions, your conversation is going to be limited to a pretty clear structure. In the broader human experience, though, language is (among other things) what sets us apart — and many of us love to use our skills. So, great, you came up with a lot of questions. Still, with this person standing in front of you, do you really need to ask all of those questions? In most cases, probably not. You brainstormed a lot so you can trim out what you don’t really need.
After all, you know a conversation is failing when the person you’re talking to asks, ‘Is this going to take much longer?’ Yikes.
How many questions should be on each survey page?
You know there’s no actual numeric answer to this question. If you want participants to see only one question at a time, use Focus Mode. It’s a great option to help them, well, focus, and it saves you from having to split questions across lots and lots of pages. Surveys designed for mobile often have a single question per page, but it’s not strictly necessary.
Here, advice switches a bit from the spoken conversation model to considering written conversation. How do you know which sentences belong in a paragraph together? How do you know which paragraphs belong in the same chapter or section? The best decision point here is to keep together related ideas. If you have three questions about a person’s initial experience when entering a branch of your credit union, keep those all together. Maybe the next page asks about their feedback when interacting with a staff member, and the last section asks for overall feedback about their visit.
Use the same topic-per-section approach we see in standardized tests. ‘This section will ask you questions about…’ Adding a comment/Descriptive Text at the beginning of each section is a good way to help you decide what belongs where and to help your participant focus on one thing at a time.
If at first you don’t succeed…
Planning is important. It’s unlikely that you’ll sit down and write out the perfect set of survey questions in the perfect order in a single go. I used to remind my writing students that famous authors didn’t just sit down and open a book of blank pages and start typing in a masterpiece from start to finish. The same is true here.
Even if you’ve started drafting content offline, building your survey online means it’s a lot easier to pick things up and move them around. Using the right format can also help you make important decisions along the way, like whether a question should allow multiple answer options to be selected and how many characters it takes to fill a Text Box.
We’ll pick it up with rearranging and revising in the last part of this series — coming soon.