Feedback surveys need to be meaningful to both the instigator and the respondent.

I have recently been on the receiving end of two random pulse surveys. One is the automated survey that appears to come up after every ‘live chat’ session with Vodafone customer support. After weeks of trying to resolve the same issue, I’m losing the will to live, let alone the will to give anything back, even feedback.

The other was a bizarre telephone survey, at home, in the middle of my working day, to find out “How likely are you to recommend the George Street, Croydon branch of Lloyds Bank following your recent visit to the branch?”

Really??!!  For this they interrupt my day?

I was passing the bank. I went in to pay in a cheque. I could barely remember the ‘occasion’ even when prompted, and I cannot think of ANY circumstances in which I would recommend one Lloyds Bank branch over another in conversation with ANYBODY. I believe this is the sentiment I expressed when I brought the survey to a premature close.

In recent months I’ve spent a lot of time looking at how the IT function can bridge the gap between baseline system performance monitoring (are the systems ‘up’ or ‘down’?) and more insightful assessment: are they meeting end users’ performance requirements?

With time and effort you can track this systematically. However, meaningful results require a comprehensive understanding of each system’s function and interdependencies, and consideration of a multitude of variables.

In the meantime, there’s nothing to beat instant user feedback, and I have experienced two variations on this theme.

The first was a very simple pulse check at the end of the working day. As each user logged off, they were asked to respond to the following question:

How satisfied were you with the performance of the IT systems today?
[smiley emoticon] / [grumpy emoticon]

The single-minded focus of the question and the use of a simple binary emoticon guaranteed a high level of participation (at least initially) and thus provided IT with a heat map of trouble spots. However, as the novelty wore off, participation waned, with users less willing to give one blanket assessment of the performance of all systems. This sort of mechanism might be worth running for a short period every six months to track the overall performance of IT systems.
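
For the technically minded, collating those logoff responses into a heat map needs nothing elaborate. The sketch below is purely illustrative – the sample data, the site names and the 50% ‘trouble spot’ threshold are my own assumptions, not details of the actual system:

```python
from collections import defaultdict
from datetime import date

# One record per user logoff: (day, site, satisfied?).
# Hypothetical sample data; in practice this comes from the logoff prompt.
responses = [
    (date(2014, 3, 3), "London",  True),
    (date(2014, 3, 3), "London",  False),
    (date(2014, 3, 3), "Croydon", False),
    (date(2014, 3, 3), "Croydon", False),
    (date(2014, 3, 4), "London",  True),
    (date(2014, 3, 4), "Croydon", True),
]

def heat_map(records):
    """Collate binary pulse responses into a dissatisfaction rate per site per day."""
    tallies = defaultdict(lambda: [0, 0])  # (day, site) -> [grumpy count, total count]
    for day, site, satisfied in records:
        tallies[(day, site)][1] += 1
        if not satisfied:
            tallies[(day, site)][0] += 1
    return {key: grumpy / total for key, (grumpy, total) in tallies.items()}

for (day, site), rate in sorted(heat_map(responses).items()):
    flag = "  <-- trouble spot" if rate >= 0.5 else ""
    print(f"{day}  {site:8s} {rate:5.0%} dissatisfied{flag}")
```

Because the question allows only a yes/no answer, the whole collation step stays trivially simple – part of what makes a binary pulse check so cheap to run.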

The second was a live database where users tick a box to show they are experiencing an outage or degraded service on a specific system. The tick-box mechanism could be completed in seconds, and the inability to add any commentary distinguished it from the formal mechanism for requesting IT assistance on an underperforming system. Results are displayed in real time, so IT can identify an escalating problem very quickly, and individual responses are automatically collated into regional results.
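
Again purely as an illustration – the 15-minute window, the escalation threshold and the system and region names below are my assumptions, not features of the actual database – the real-time collation behind such a tick box might look something like this:

```python
from collections import Counter
from datetime import datetime, timedelta

# Each tick is a (when, system, region) event; no free-text commentary allowed.
ticks = [
    (datetime(2014, 3, 4, 9, 1), "Email",    "EMEA"),
    (datetime(2014, 3, 4, 9, 2), "Email",    "EMEA"),
    (datetime(2014, 3, 4, 9, 3), "Email",    "APAC"),
    (datetime(2014, 3, 4, 9, 5), "Intranet", "EMEA"),
]

def regional_summary(events, now, window=timedelta(minutes=15), threshold=2):
    """Collate ticks received within the window into per-system, per-region counts."""
    recent = [(system, region) for when, system, region in events if now - when <= window]
    for (system, region), n in Counter(recent).most_common():
        flag = "  <-- escalating" if n >= threshold else ""
        print(f"{system:10s} {region:6s} {n} report(s){flag}")

regional_summary(ticks, now=datetime(2014, 3, 4, 9, 10))
```

The design choice worth noting is the sliding window: by counting only recent ticks rather than a running total, the display surfaces a problem while it is escalating and quietly clears itself once the reports stop.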

Naturally, any feedback survey is useful only as long as it continues to attract user input, and that will only happen if the issues raised are perceptibly addressed. Otherwise it becomes another meaningless tick-box exercise … which is how I feel about so many annual employee surveys – see my previous blog on the subject: http://wp.me/p2awEy-4B.
