I was recently asked by the US Postal Service’s website to complete an online customer satisfaction survey. I agreed to take it, and while answering the questions I noticed my patience beginning to wane. Some of the open-ended questions demanded too much information and detail, and other questions were designed rather poorly. Instead of providing radio buttons to indicate a number between 1 and 10, I had to type the number in manually. Although that doesn’t seem like a big deal, it presents a UXD problem: customer satisfaction surveys should be brief and should minimize the work a survey taker has to perform, especially considering that the taker is volunteering their time to complete it.
This USPS survey got me thinking of a somewhat strange question: shouldn’t surveys undergo usability testing? If an organization regularly uses surveys to assess customer or user satisfaction, then it’s safe to assume the organization is relying on those surveys to be usable and user-friendly for the people who choose to take them. If a survey is difficult or frustrating to use, then the feedback it gathers could be negatively affected by that poor design. Some users might even abandon the survey before finishing, reducing the amount of feedback collected.
Is it logistically confusing to test the usability of a survey? In other words, is surveying a survey contradictory? After all, some usability tests include surveys themselves. I might, however, be over-thinking this issue. One simple method would be to ask users to complete several versions of a survey with different designs, then compare which design most of them prefer.
Has anyone tested a survey before? If you have any experiences or thoughts to share, please do so!