Reeling in Big ROI with Analytics: How to Maximize Your Customer Feedback Program
If you currently collect customer feedback in your center but only look at high-level summary results, it’s like being a commercial fisherman gathering the daily catch with a fishing rod and reel. To maximize their ROI (Return on Investment), commercial fishermen use a trawler’s net rather than a single line, and they analyze the wind, tides, currents, and numerous other variables to get the most out of their efforts. They know where to fish and what type of fish they are looking for before they leave the marina. Even a novice cannot imagine a commercial fisherman hauling in his catch over the side of his boat with a single line. By looking only at isolated summary results from your VOC (Voice of the Customer) measurement program, you are doing that very same thing.
Managers cannot look at data as a static condition – you have to look at the relationships between things; nowhere is such a relationship more obvious than in a contact center, where one thing drastically affects another. There is a cascading effect: when you tweak one thing, it has an impact on others. Therefore, you cannot treat data as a static summary of numbers; you must examine relationships by applying analytics to the data. You certainly do not manage the center one thing at a time, so why would you look at customer feedback data only one way?
Most customer feedback data reports provide very basic analysis – frequencies and means – of the information from the caller satisfaction survey. While this information is interesting, it limits your understanding of the callers’ evaluation of the service your center provides. From basic analysis alone, you cannot accurately determine which elements will drive change in overall caller satisfaction. You will know how many callers are satisfied on each individual question you ask, but you will not know why they feel that way or how it impacts overall satisfaction. Without knowing the why, you do not have actionable intelligence, which lowers the resulting value – and ROI – of your measurement program.
The goals of a VOC program are to collect data that can accurately answer two questions: how are we performing, and which attributes drive overall satisfaction?
The performance means and calculated impact values of each service attribute enable the quantitative identification of areas where service performance may be below an acceptable level – a level defined by the caller – and where the resulting impact on satisfaction is high. With this type of analysis, management can effectively address specific improvement areas that are drivers of satisfaction and therefore contribute substantially to ROI. Without analyzing the relationships, you cannot determine which static number truly affects the perception of service delivery.
For the “how are we performing” part of the analysis, the survey results from callers are normalized to a 100-point grading scale. This can be calculated no matter what scale you use for your surveys. Many of our clients use a 9-point scale for their immediate post-call surveys, and they normalize the data in the following way:
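A minimal sketch of such a conversion, assuming a simple linear rescaling (the article does not reproduce its clients’ exact formula, so this is one common approach, not necessarily theirs):

```python
def normalize_to_100(score, scale_min=1, scale_max=9):
    """Linearly rescale a survey rating to a 0-100 grading scale.

    Assumption: a 9-point scale running from 1 (worst) to 9 (best);
    the endpoints map to 0 and 100, with ratings spaced evenly between.
    """
    return (score - scale_min) / (scale_max - scale_min) * 100

# A top rating of 9 maps to 100, the midpoint 5 maps to 50, and a 1 maps to 0.
print(normalize_to_100(9))  # 100.0
print(normalize_to_100(5))  # 50.0
print(normalize_to_100(1))  # 0.0
```

The same function handles a 5-point or 7-point survey by changing `scale_max`, which is why the 100-point grade can be computed no matter what scale you use.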
Therefore, your answers will now appear on a 100-point grading scale. These normalized scores are then analyzed, and the mean values for the questions (attributes) in the survey become the performance bars seen in the figure below. These means establish a metric for performance on each attribute that can be tracked over time. In research, a performance score of 85 or higher has been shown to be high enough to contribute to loyalty, while attributes scoring below that level become areas of focus for improvement. In this form, however, mean scores or averages remain a static measure.
The process of computing impact scores allows you to identify the callers’ assessment of how each question (attribute) impacts overall satisfaction. This requires an additional layer of analysis beyond the calculation of mean performance scores. The analysis quantitatively identifies how a change in one attribute can lead to a change in another and in overall satisfaction. The impact values are computed by regression analysis, which considers the mathematical relationship AMONG the points of measurement, so the results are relational rather than static.
Regression analysis from caller satisfaction data will allow you to concretely demonstrate the amount of impact that each attribute has on overall satisfaction. This will allow you to monitor the areas of change that will raise caller satisfaction in your center and allow you to concentrate on the areas that are most important to your customer. Since you can provide this analysis for each center location, for each team, for each queue or for each type of customer, better defined results and areas of change will become apparent. Therefore, you can maximize your coaching and training by implementing change for the specific areas that need it most.
In this analysis, the aspects of service combine to produce an effect that is perceived by the customer. Rather than examining a static, attribute-by-attribute picture, which will not identify the improvement areas, regression examines the interaction of the attributes, giving you a much more robust analysis. Building a regression model allows you to examine the callers’ overall perception of the service received during the call. An example of a model, in words, would be:
The rating of overall satisfaction with the CSR is a function of the CSR’s ability to quickly understand the reason for the call (Q1), attention to the details (Q2), being treated as a valued customer (Q3), the product and service knowledge (Q4), the completeness of the answer provided (Q5), the confidence in the answer/solution instilled by the CSR (Q6), and demonstrating ownership of the issue (Q7).
That model, mathematically, is:
Overall Satisfaction = f (Q1, Q2, Q3, Q4, Q5, Q6, Q7)
By running the regression analysis, you can see which attributes impact caller satisfaction the most. Once you run the analysis and have the output, first examine the R-square value – this will tell you the amount of variance in caller satisfaction that is explained by the attributes that you have selected to include in the model (in our example above, there are 7 attributes selected). If your R-square value is above .400, it is considered to be a good model. To explain this value, if your results show an R-square value of .5126, then 51.26 percent of the variance in caller satisfaction is explained by the attributes that are included in the model.
Next, examine the impact that each individual attribute has on caller satisfaction by reviewing the B value. This is the Beta coefficient. The higher the value of B, the more impact that attribute has on caller satisfaction. Then look at the significance values (p-values) from your output. If the values are .05 or less, the attributes are considered statistically significant. Those that have the highest B value and are statistically significant have the most impact on caller satisfaction, which tells you that a larger ROI will be found by focusing improvement on such issues.
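The regression workflow above can be sketched with simulated survey data. Everything here is illustrative: the ratings, the “true” weights, and the noise level are invented assumptions, and ordinary least squares via NumPy stands in for whichever statistical package you use (a full package would also report the significance values discussed above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # simulated survey responses

# Simulated attribute ratings Q1..Q7, already normalized to the 100-point scale.
X = rng.uniform(50, 100, size=(n, 7))

# Hypothetical "true" weights: in this sketch Q2, Q3, Q5, and Q7 drive
# satisfaction, mirroring the article's example; Q1, Q4, and Q6 do not.
true_b = np.array([0.0, 0.4, 0.3, 0.0, 0.25, 0.0, 0.35])
y = X @ true_b + rng.normal(0, 5, size=n)  # overall satisfaction + noise

# Ordinary least squares fit of: Overall Satisfaction = f(Q1, ..., Q7)
X1 = np.column_stack([np.ones(n), X])          # prepend an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

# R-square: share of the variance in satisfaction explained by the model.
y_hat = X1 @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

print(f"R-square: {r2:.4f}")
for i, b in enumerate(coef[1:], start=1):
    print(f"Q{i} impact (B): {b:.3f}")
```

With enough responses, the recovered B values land close to the weights used to simulate the data, and the R-square comfortably clears the .400 threshold the article cites for a good model.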
The drivers of satisfaction, identified by the analysis, must be combined with the measures of performance to present a complete picture of the relationship AMONG the attributes. Referring to the chart below that Customer Relationship Metrics provides to its clients, the drivers of satisfaction are marked with an asterisk because each is statistically significant (p < .05). The impact values on the left are the B values (weights) from the regression model discussed above. The performance scores on the right are the mean scores for each question as rated by the caller and converted to the 100-point scale. Presenting the analysis pictorially helps users easily identify the improvement opportunities with the greatest potential for increasing ROI.
The visual format clearly identifies Q2 attention to the details, Q3 treating as a valued customer, Q5 providing a complete answer, and Q7 taking ownership of the issue as the drivers of satisfaction for this group (center, team or queue). The areas of focus should be those that have the largest impact on caller satisfaction with the lowest performance scores.
Below, the results of the analysis are displayed in a decision matrix. This clearly identifies the attributes that offer the best opportunities for action by the center (team or queue). To generate customer loyalty, a score of 85 is set as the dividing line between high and low performance. Attributes that are statistically significant appear in the top row, and those that are not appear in the bottom row. The challenge is therefore to move attributes into the “top box” – the upper-right cell – where they become service differentiators. As the example shows, attention to details has a high performance score (over the 85 threshold) and is a driver of satisfaction; therefore it is in the top box.
Those that do not reach the 85-point performance threshold AND are statistically significant are the attributes that require some investment to move them into the top box. And since they have the most impact on caller satisfaction, improvement will give you the most bang for the buck. Those in the lower row can simply be monitored, since they do not require an investment at this point in time. In this example, quickly understood and confidence in the solution have high performance scores but are not drivers of satisfaction, and therefore are not prioritized for action. Should the results for these attributes change over time and move them into the top row, the priority would change.
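The decision-matrix logic can be sketched in a few lines of code. The attribute names follow the article’s example, but the performance scores and significance flags below are hypothetical placeholders, not values from the chart:

```python
THRESHOLD = 85  # dividing line between high and low performance

# Hypothetical data: (attribute, normalized performance score,
# statistically significant driver of satisfaction?)
attributes = [
    ("Q1 quickly understood",     90, False),
    ("Q2 attention to details",   88, True),
    ("Q3 valued customer",        80, True),
    ("Q5 complete answer",        78, True),
    ("Q6 confidence in solution", 91, False),
    ("Q7 ownership of issue",     82, True),
]

results = {}
for name, score, is_driver in attributes:
    if is_driver and score >= THRESHOLD:
        box = "top box: service differentiator - maintain"
    elif is_driver:
        box = "invest: driver with performance below threshold"
    else:
        box = "monitor: not a driver at this time"
    results[name] = box
    print(f"{name}: {box}")
```

Rerunning this classification on each period’s survey data is what turns the matrix into the continuous process described below: an attribute that becomes a significant driver moves from the monitor row into the invest or top-box cells.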
This example shows that the customer evaluations cannot be reviewed as a static number. The relationship in the matrix is right to left (acceptable or not) and top to bottom (a driver or not). This is a continuous process - as you improve and invest in important areas, identification of the change is required to prove an ROI for the improvement initiative.
The drivers of satisfaction will differ for different sets of data: one team may have different strengths and weaknesses than another, as may one queue versus another. The power of this analysis lies in providing such a view to pinpoint areas of opportunity. With this scrutiny, you will be fishing for the right data with the right equipment, which leads to the right analysis for creating change in the center. The results from the VOC program will show a high ROI on your investment of time and resources in service delivery improvement.
Customer Relationship Metrics recently completed a case study (see http://www.metrics.net/Articles.asp for the full text) demonstrating that differences in not only how you collect the data but also how you analyze it can mean the difference between a negative ROI and a very large positive one. In the negative-ROI case, the company had to absorb an increase in annual expenses of over $50,000 and handle over 800 additional repeat calls per month. In the positive-ROI case, the company achieved annual savings of almost $190,000 and is handling over 3,000 fewer repeat calls a month. These figures also translate into increased productivity, a lower cost per call, and a lower headcount – none of which is included in the $190,000 savings. This company realized a one-month payback on their VOC program and increased the ROI not only of the caller feedback program but of the training and coaching efforts as well. So, put away your rod and reel and let’s really catch some fish.
Jodie Monger, PhD, is the President of Customer Relationship Metrics, LC (www.Metrics.net). Prior to joining Metrics, she was the founding Associate Director of Purdue University's Center for Customer-Driven Quality. Her expertise is in working with Fortune 1000 companies to help them quantify the Voice of their Customer.
Many more articles in Customer Relationship Management in The CEO Refresher Archives