However, receiving that feedback is pretty important when developing new products and understanding users' "perceived" reasons for doing what they do. Only so much can be inferred from web metrics.
You say that like there aren't better alternatives. There are. Some involve other forms of user feedback than the traditionally structured survey. Of all the things you could choose, what people say is the least predictive of anything. Strangely enough, surveys get a free pass from the people employing them. That the resulting data goes largely unquestioned, unaccountable, and untested against alternatives makes it doubly harmful.
Time and again people find that 65% to 85% of customers who defect reported being "satisfied" or "very satisfied." If these people gave you any product development advice, you're building your product for non-customers.
Let's say you're a developer, and you want data collection that actually predicts which features users want. You can issue "points" per dollar spent. Only customers can use the points to vote proposed features up or down -- which also offers an incentive to register. Each customer can spend only a limited number of points, and that scarcity produces a very different user psychology.
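A minimal sketch of how such a points budget might be modeled. All class, method, and customer names here are hypothetical illustrations, not taken from any real system:

```python
# Hypothetical points-per-dollar feature-voting scheme.
# Scarcity of points forces customers to prioritize, unlike a free-form survey.

class FeatureBoard:
    def __init__(self):
        self.balances = {}   # customer -> points remaining
        self.votes = {}      # feature -> net points committed

    def record_purchase(self, customer, dollars, points_per_dollar=1):
        """Only paying customers earn voting points."""
        self.balances[customer] = self.balances.get(customer, 0) + dollars * points_per_dollar

    def vote(self, customer, feature, points):
        """Spend part of a limited budget on one proposed feature."""
        if points <= 0 or self.balances.get(customer, 0) < points:
            raise ValueError("not enough points")
        self.balances[customer] -= points
        self.votes[feature] = self.votes.get(feature, 0) + points

board = FeatureBoard()
board.record_purchase("alice", 50)
board.vote("alice", "dark mode", 30)
board.vote("alice", "offline sync", 20)
# alice's budget is now spent -- she cannot also vote for a third feature,
# so the tally reflects trade-offs, not cost-free wish lists.
```

The point of the design is the hard budget: a vote costs something, so the ranking that comes out reflects priorities rather than politeness.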
"Fund a feature" interaction is much better at sorting out what people say they want from what they're willing to pay for. And even if what you're offering is at zero cost, findings are highly relevant. Even Drupal uses Donorge -- donation based fund-a-feature. (Just FYI, Drupal has a survey module)
At the very best, satisfaction levels are highly misleading. They're used about the same way people install web counters: the scripts are easy to set up, and they give a false sense of assurance.
I'm not even going to open the "yes, but this way we don't have to do a user test" can of worms.
A company conducted focus groups for their Product X, which had as its main competitor Product Q. They asked people who were using Product Q, "Why do you use Product Q instead of Product X?" The respondents gave their reasons: "Because Product Q has feature F," "Because Product Q performs G faster," "Because Product Q lets me do activity H." They added, "If Product X did all that and was cheaper, we'd switch to it."
Armed with this valuable insight, the company expended time, effort, and money in adding feature F to Product X, making Product X do G faster, and adding the ability to do activity H. They lowered the price and sat back and waited for the customers to beat a path to their door.
But the customers didn't come.
Because the customers were lying. In reality, they had no intention of switching from Product Q to Product X at all. They grew up with Product Q, they were used to the way Product Q worked, they simply liked Product Q. Product Q had what in the hot bubble-days was called "mindshare", but what in older days was called "brand loyalty" or just "inertia".
-- People lie on surveys and focus groups, often unwittingly
Surveys are yet another example of bad interaction design (flash, hit counters, blink tags) driving out good interaction design. The only difference is blink tags don't feed you disastrously misleading data.
Just as people once gravitated to hit counters and blinking, flashing junk design in total denial of user psychology, surveys are going to end up as the blink tag of the future.
Related: I Repeat: Do Not Listen to Your Users. "Surveys unsatisfactory" is interesting as it provides a barely whispered hint at why surveys are really employed: as a mental sedative. Actual survey use is similar to the joke about why a drunk uses a lamp post -- for support, not illumination. People want validation of the status quo, not information that might rock the boat.
Not to mention the trivial fact: satisfaction levels don't give you the "why" any more than anything else does. Which brings me to Beyond Being Satisfied: "They report that the rationally satisfied customers, although extremely satisfied, lack a strong emotional attachment to the company." They were satisfied in the past, but they won't be loyal in the future.
It really doesn't matter if they wrote an essay about their reasons; if they don't follow it up with action, whatever they say to explain the satisfaction is worthless. And if you then base some action or design decision on people who are no longer customers, the data is less than worthless -- it sends you off in entirely the wrong direction. Enter The Customer Driven Death Spiral. What happens when you design a product for the median customer response, rather than developing the insight that two, three, or five distinct customer segments -- each with very different objectives and desires -- are being put into one big bin and averaged? The customer death spiral, AKA doing a little something for everyone in the market, which disappoints each segment making up "the market."
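A toy numeric illustration of the averaging trap, with made-up numbers: two segments sit at opposite ends of a single design dimension, and the pooled average lands where nobody actually is.

```python
# Hypothetical preference data on one design dimension:
# 0 = minimal, stripped-down UI; 10 = maximally configurable power-user UI.
power_users = [9, 10, 9, 8]   # one segment wants configurability
novices = [1, 0, 2, 1]        # the other wants simplicity

# Pooling both segments into "one big bin" and averaging:
pooled = power_users + novices
average_design = sum(pooled) / len(pooled)

print(average_design)  # 5.0 -- a middle ground no respondent asked for

# Every single customer is at least 3 units away from the averaged design,
# so both segments are disappointed by the "something for everyone" product.
print(min(abs(x - average_design) for x in pooled))  # 3.0
```

Segmenting first and serving each bin (or picking one deliberately) avoids shipping the 5.0 that satisfies neither group.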
Edited by DCrx, 08 March 2008 - 07:00 AM.