Corporate IT support capabilities – whether called an IT help desk, IT service desk, or something else – have been around for three decades and, for a good part of this millennium, end-user (or employee) expectations of them remained somewhat static. So did their performance, with what can be called traditional IT support metrics showing it was "good enough." Then, in the last decade, what was termed "the consumerisation of IT" raised the game for corporate IT organisations – not only in terms of the provided devices, apps, and cloud services but also the "service envelope" that surrounded IT service delivery and support.
However, this was only the start of the changes that would pressure IT support teams to better serve the employees within their organisation (or the customer organisation, in the case of outsourced IT support). The need for better employee experiences has also grown significantly in the last half-decade, more recently being positioned as the improvement of employee productivity. Then, the global pandemic brought remote working, which amplified the need for technology, and the importance of IT support efficiency and effectiveness grew with it.
So, how good is your IT support capability? Importantly, is this what your metrics say or what your end users think? There can be a big difference when the wrong measurements are used.
To help, this blog explains the issues with traditional IT support metrics and offers an alternative approach for better gauging the performance of your IT support capability.
The issues with traditional IT support metrics
If you stop to think about the metrics traditionally employed in IT support, they're often related to "How many?" or "How quickly?" – for example, the total number of incidents handled or the average handling time – and can be considered operationally focused rather than concerned with the end-user experience and the outcomes that are achieved.
So, they’re usually related to the “mechanics” of IT support – with the emphasis on supply-side process execution rather than the demand-side result. It’s why IT support performance metrics can show that all in IT support is working well but the workforce thinks otherwise.
There are commonly other measurement issues too. For example, performance might be measured at the point of service creation rather than consumption, i.e. in the wrong place, or the IT metrics might overlook what's most important to employees.
Of course, not all traditional IT support metrics are about “How many?” or “How quickly?” with customer satisfaction (CSAT) questionnaires the obvious exception. But even this “customer-centric” IT support measurement might not be feeling the true pulse of end-user sentiment.
Why even CSAT questionnaires have issues
Sadly, the use of CSAT questionnaires isn't a valid safety net for the issues outlined above. They do have value, because they give employees the ability to provide feedback on IT's performance, but there are issues with them too. For example, there are three common visible "symptoms":
- A low response rate
- Feedback skewing, with only great or poor service experiences reported on (and the employees that aren't using the corporate IT support capabilities might never be canvassed)
- A delay in feedback receipt
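To make the first two symptoms concrete, here's a minimal sketch of how a low response rate and feedback skew can interact. All of the figures below are invented for illustration – the point is only that a survey answered by a handful of employees with extreme experiences can still report a plausible-looking average while saying nothing about the silent majority.

```python
# Hypothetical CSAT data: 1-5 scores from the few employees who
# responded, out of a much larger number of surveys sent.
# These numbers are illustrative, not from any real survey.

surveys_sent = 200
responses = [5, 5, 1, 5, 1, 2, 5, 1]  # skewed: mostly extremes respond

response_rate = len(responses) / surveys_sent
avg_csat = sum(responses) / len(responses)

# A 4% response rate means 96% of recipients are unheard,
# yet the average alone looks like a usable headline figure.
print(f"Response rate: {response_rate:.0%}")  # → Response rate: 4%
print(f"Average CSAT: {avg_csat} / 5")        # → Average CSAT: 3.125 / 5
```

A report built on this data would quote "3.125 out of 5" with some confidence, even though it reflects only eight voices and a distribution of almost entirely 1s and 5s – exactly the skew described above.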
The first of these issues might stem from various "root causes" in the CSAT questionnaire design. For example, that there's:
- Also an operational focus in the questioning rather than on what’s important to employees
- So many questions that the respondent “gives up” or ignores it
- Limited opportunity for respondents to provide valuable context to their feedback
- Minimal historical evidence that feedback is acted upon to improve performance
But all three of the symptoms might also be caused by the CSAT questionnaire delivery method, i.e. via email, perhaps only to those employees that have engaged with IT support. Here, the request for feedback is lost within the employee's constant stream of work emails, so it never gets the visibility necessary to elicit the required feedback. Plus, of course, the employees that don't use the IT support capabilities likely have no opportunity to explain why.
A solution for turning CSAT questionnaires into the springboard for IT-support improvement
The solution here needs to address all of the key underlying issues, i.e. the causes rather than the visible symptoms.
First, the design of CSAT questionnaires needs to be based around the end user, rather than simply being dictated by the service provider, such that it addresses the design issues reported above. For example, it should be quick and easy to provide feedback, and the questionnaire should ask about what matters most (to the end user), including the ability to provide specific feedback.
Second, the feedback request needs to be timely and clearly visible to the end user, not hidden within the sea of "more important" emails in their inbox. This is also linked with both questionnaire design and the visibility of related improvements – end users who do see the feedback request might otherwise think "I don't have the time to complete this questionnaire" or even "I don't have the time to complete this questionnaire, especially when they do nothing with my feedback".
Third, and this is an extension of the second point, feedback needs to be elicited from all employees, not just those who are still happy to use the corporate IT support capability. For example, to ascertain why some people aren't using the capability – is it because they've had no IT issues, or because they choose either to struggle on without help or to seek assistance elsewhere (with both options likely caused by their previous IT support experiences)?
Ultimately, you need to receive the employee feedback that allows your IT support capability to accurately gauge how well it’s doing and to identify and drive the necessary improvements in the right places.