Is Your Customer Part of
Your Benchmark Process?
Eventually, every contact center manager gets the question from a C-level executive: “How does that compare to others?” If you haven’t been asked yet, I promise you it is coming. Prepare yourself, and use the response to your advantage by highlighting your understanding of the industry.
Benchmarking is a term we have come to throw around casually, and we often gather information carelessly just to satisfy a benchmarking data request. To a research scientist, using suspect data points causes a high level of discomfort. Rather than grabbing any piece of information to answer the comparison question, critically examine the source of the data. Is the information part of an official benchmarking study? Do you know exactly who the participants are, so that a better comparison can be made? Was the information self-reported by the participants, and therefore possibly presented optimistically to gain a more favorable comparison? Are the variables being measured explicitly defined, so that all participants respond consistently and an apples-to-apples comparison is possible? Understanding the source of the data points is imperative because it affects their integrity.
As we gauge ourselves against others in the contact center industry, we do so in an effort to effect change in our service delivery. To create a competitive advantage, benchmarking became part of how the contact center sets goals and operating targets. We want to be as good as, and preferably better than, our competition, and that is the fact we need to demonstrate when asked how we compare. But one important question must be asked here: are we comparing ourselves to the right group? If the answer is no, or even “I don’t know,” the benchmarking data could be completely irrelevant. Most direct competitors are unwilling to share their performance metrics, and if they are willing, be somewhat wary. Therefore, we need to depend on benchmarking projects that report scores for our specific industry.
The most important thing to keep in mind is that while there are several ways to benchmark the performance of your contact center, when all is said and done and the results are in, it is your customer who has the final say on whether you have best practices.
The benchmarking process can be divided into two parts: practices and metrics. The tendency, especially in contact centers, is to concentrate on the metrics and forget the practices. It is easy for managers to work with metrics and quantitative targets to quickly identify where the contact center stands against the competition. It is very important to have comparable information so you can say with confidence, “We need to make some changes because our performance is not where it needs to be.” The problem with this approach is that you cannot identify why a gap exists just by analyzing the metrics. This is why the practices part is such a useful aspect of the benchmarking process: it lets you identify the methods behind the metrics and understand why the gaps exist. Looking only at the metrics can mislead you and defeat the purpose of benchmarking in the first place.
Another way contact centers defeat the purpose of benchmarking is by chasing the performance goals of only one or two perceived leaders in contact center performance. Contact centers that do this will always be putting out fires and will constantly launch new initiatives that are doomed to fail, because what is right for one center is not always right for another.
So let’s say you go about your benchmarking project the right way. You combine metrics and practices so the numbers have life and can be understood more completely. You compare your metrics to a relevant sample of other contact centers with similar attributes, getting as close to an apples-to-apples comparison as possible. Based on these assumptions, the benchmark results should allow you to make well-educated business decisions about how to become a best-in-class contact center.
Whoa, stop right there. You are missing the most critical piece of the puzzle … your customer. The biggest question that arises from a benchmarking report is how the metrics align with caller satisfaction. The efficiency data must be overlaid with the effectiveness data. Unfortunately, this is rarely an output of benchmarking projects, yet it is critical to the process.
You know what your metrics are, and now you know from benchmarking which metrics are identified as best practice for your industry. Validate your performance metrics with YOUR callers to determine whether there are deficiencies. If there are no gains to be had by reaching the best-of-the-best metric, why announce that as a goal and expend the energy and resources?
Without the customer to validate your metrics, your decision-making process is flawed. Therefore, before you set out to make big changes in your contact center because of what the benchmark results told you, take some time to find out what your customers think and whether the desired changes would even improve their service experience with your company. With these customer data points, your response to the C-level executive can either defend your metrics or defend your case for making changes.
To complement our customers’ operational metrics benchmarking data, we provide the Voice of Their Customer through our CATs (Completely Automated Telephone surveys) Meow benchmarking data. Looking at the best-of-the-best scores (the CATs Meow) and the average scores as an apples-to-apples comparison of CALLER opinions of service delivery yields truly key data points. For one client last month, overall call satisfaction received a top box score of 65.0%. The center was able to compare its caller evaluations to 63,378 other CATs scores, which averaged 60.2% with a CATs Meow of 68.9% for this exact question. This information, combined with the data from the benchmark project they participated in, has more meaning because it includes the callers’ perspective. A second topic of interest to them was the top box score of 55.8% on “treating the customer as valued”; across 52,744 comparable CATs scores, the average was 56.4% and the CATs Meow 64.7%. Their score is not statistically significantly different from the average (meaning the two are effectively the same), making this an area to focus on for improvement toward the CATs Meow level.
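The significance claim above can be checked with a standard two-proportion z-test. The sketch below is illustrative only: the client’s own survey count is not given in the article, so a hypothetical sample size of 400 is assumed; the 55.8% score and the 56.4% average over 52,744 comparison scores come from the example.

```python
import math

def two_prop_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """Z statistic for the difference between two proportions, using a
    pooled estimate for the standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Client top box score of 55.8% (sample size of 400 is a hypothetical
# assumption, not stated in the article) vs. the benchmark average of
# 56.4% over 52,744 comparable CATs scores.
z = two_prop_z(0.558, 400, 0.564, 52744)

# |z| < 1.96 means the gap is not significant at the 5% level, consistent
# with the article's conclusion that the scores are effectively the same.
print(f"z = {z:.2f}, significant at 5% level: {abs(z) > 1.96}")
```

With any plausible single-center sample size, a 0.6-point gap against this average is well within sampling noise, which is why announcing a goal based on that gap alone would waste energy and resources.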
So should you stop benchmarking operational metrics? That is not what I am saying. You need to take everything into account when answering the comparability question and realize who has the final say on world class: it is not you, and it is not me, unless of course I am your customer.
Dr. Jodie Monger, PhD is the President of Customer Relationship Metrics, LC (www.Metrics.net). Prior to joining Metrics, she was the founding Associate Director of Purdue University's Center for Customer-Driven Quality. Her expertise is working with Fortune 1000 companies to help them quantify the Voice of their Customer.