Agent-level Customer Feedback and the Impact on First Contact Resolution
by Dr. Jodie Monger and Dr. Cherie Keen

Introduction

Contact center leaders are deluged with Customer Relationship Management (CRM) solutions that promise to reduce costs, heighten customer satisfaction and loyalty, improve productivity, and do more with less. But no matter which CRM solution or program you choose to implement, you must remember the most critical factor for your contact center and for your company's existence: the customer.

Many of these initiatives come from top-level executives, who are usually far removed from customer interactions, and many of the solutions presented appear to be the perfect answer for enhancing customer satisfaction and loyalty by increasing First Contact Resolution (FCR). What to focus on with respect to the center's performance is often based on benchmark data attempting to show that the contact center is in line (or not) with others. The most important thing to keep in mind is that while there are several ways to benchmark the performance of your contact center, when all is said and done, the customer has the final say on whether you have best practices. Although companies may have initiated several solutions that are supposed to improve customer satisfaction and loyalty, many are flawed because they neglect to bring the customer into the mix. One way to ensure that these relationship management strategies are effective for the bottom line is to have a good Voice of the Customer measurement program in place.

A primary goal of our centers today is to provide First Contact Resolution; failing to do so is a main driver of cost through repeat calls and a direct cause of customer dissatisfaction. FCR is one of the most important metrics to us, and yet it is also the most elusive and miscalculated metric in a contact center. Contact centers have employed various types of technology and manual solutions to help calculate their FCR rate. While some of the data collected internally is useful, the reality is that none of it should or could answer the question on behalf of the customer. The customer is the one experiencing the possible pain from your service strategy and is your single best source to determine this metric.

Beyond learning the metric by going directly to the source (your customer), linking caller evaluations to the agent has a profound impact on First Contact Resolution (FCR) rates and allows you to focus your training and coaching on the agents who need it most. Having this information at hand allows the contact center to provide the right training and coaching at the right time, to the right agents, adding up to a higher ROI not only for the Voice of the Customer program but also for your training and coaching efforts. Increased FCR percentages yield higher contact satisfaction and contribute directly to customer loyalty.

To prove this point, Customer Relationship Metrics (Metrics) conducted a research project that provides evidence that real-time agent-level customer feedback has a significant positive impact on First Contact Resolution rates and creates a significantly higher ROI for a Voice of the Customer measurement program. The results are described in this paper.

Metrics selected two new clients for this research. Both companies began a Voice of the Customer program to collect feedback via an immediate post-call customer survey. While Metrics assisted both call centers through the development of the customer feedback measurement program, each call center implemented the program differently. "Call Center A" collected its customer feedback using the service of Metrics' Completely Automated Telephone survey system (CATs®). CAT surveys collect both quantitative numeric scores and qualitative data (customer comments). "Call Center B," believing it would be more cost effective, implemented the survey script designed by Metrics on its own IVR system; as a result, it could collect only center-level data and numeric ratings. Call Center B sent Metrics the data each month, and Metrics returned the analysis and reporting.

In summary, Call Center A collected results that were directly linked to the agent who handled the customer's call and could gather actual verbal comments directly from its customers. Call Center B collected results that were not linked to the specific agent who handled the call and did not have the ability to collect or report verbal comments from its customers.

The objectives of the research study were to assess the impact of real-time agent-level feedback on First Contact Resolution, measure the ROI of such a customer feedback program, and manage to the results to improve the training and coaching of agents. The research focused on whether, and by how much, First Contact Resolution would be affected after the implementation of a robust CATs service solution versus an internal IVR data collection system.

The highlights of our research with the two call centers are summarized below. We found the following over a six-month period:

Call Center A

  • First Contact Resolution rate increased by 10.4 percentage points;
  • Will handle 3,120 fewer repeat calls per month;
  • Will realize annual savings of $187,200*
    (Note: a simple direct cost analysis puts the ROI for CATs at 390%)

Call Center B

  • First Contact Resolution rate decreased by 2.9 percentage points;
  • Will have to handle an additional 870 repeat calls per month;
  • Will incur an additional $52,200 in annual expenses*
    * Normalized using a $5 cost per call.

Measurement Methods

Effectiveness of service delivery can be measured in many ways in a contact center. The ultimate objective of any quality program is to answer the question, “Are our customers being served well by our contact center?” The measurement methods utilized are critical to having a valid answer to the question. Quality is important to you because in today’s “commoditized” economy, the only differentiating and competitive advantage a company has is the service it provides. All too often companies that have the best product become extinct due to poor service.

Most contact center managers who focus on maintaining a high level of quality service in their center have implemented a call monitoring program. These programs typically involve five to twenty live or remote monitoring occurrences a month of an agent's contacts with different customers. Through this process, the agent is held accountable to a predetermined set of criteria. The criteria often include courtesy, verbal skills, listening skills, product or service knowledge, willingness to help, sales, and more, depending on the industry and product or service.

The standard monitoring form may have been created by different people in the contact center, by a manager, or sometimes by a senior-level executive who has a personal expectation of how a contact "should" be conducted. The theory is that if an agent follows the criteria set forth in the monitoring form, the customer will receive a high level of service and the company will retain the customer, since satisfaction contributes to loyalty.

In reality, this theory leaves a lot of contact center managers scratching their heads in disbelief when customer feedback is finally captured and reveals a negative double-digit variance between the internal quality scores and the customers' measured level of service. How is that possible? What is the problem?

The problem is that all customers want to be taken care of efficiently and effectively, but the elements required to accomplish this vary from one person to the next. Therefore, creating a monitoring form with set criteria that must be adhered to on every contact may be inappropriate, contribute to the above-mentioned gap, and even cause some caller dissatisfaction as the agent forces required criteria into the call. A monitoring form should be a dynamic instrument that can handle different types of interactions, with the ultimate goal being, "The customer was satisfied and a repeat contact on this issue should not occur." An internally estimated FCR value is possible; however, the customer ultimately provides the input values for the metric.

The methods used most often to capture customer evaluations of service delivery for telephone interactions include CATs (Completely Automated Telephone surveys), email, and internal IVR (Interactive Voice Response). So then, what is the best way for companies to identify customers' perceptions? Some companies have realized that traditional survey methods for gathering customer intelligence via the contact center do not always work and have sought better alternatives. While traditional methods seem to be the most cost effective, cheap becomes expensive when the voice of the customer is not heard accurately and quickly. Customer feedback gathered by a flawed strategy can easily mislead and ultimately misdirect your service strategy.

The measurement system you use must reflect your mission to be customer-focused and easy to do business with, and that means congruence with the customer-selected channel of communication. If this is not done, "survey channel slamming" occurs. Survey channel slamming happens when a customer sends you an email and you send them a survey through the post office, or when a customer contacts you by phone and you send them an email survey. This is dangerous to your measurement program and must be avoided. Ignoring customer preferences can generate a service weakness and may create a source of dissatisfaction; this undermines the validity of the Voice of the Customer measurement program. To best measure the effectiveness of service delivery, an immediate evaluation is needed via your customers' preferred channel. This will ensure the success of your Voice of the Customer program as well as increase your customers' satisfaction and loyalty. It creates the ideal situation that contact center managers are searching for.
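In code form, the channel-congruence rule is a simple lookup. The sketch below is purely illustrative and not part of any Metrics product; the channel names and survey labels are assumptions:

```python
# Illustrative only: match the survey channel to the contact channel
# the customer chose, to avoid "survey channel slamming."
CHANNEL_TO_SURVEY = {
    "phone": "immediate post-call automated phone survey",
    "email": "email survey",
    "mail": "postal survey",
}

def pick_survey_channel(contact_channel: str) -> str:
    """Survey the customer on the same channel they used to reach you."""
    if contact_channel not in CHANNEL_TO_SURVEY:
        raise ValueError(f"No survey defined for channel: {contact_channel}")
    return CHANNEL_TO_SURVEY[contact_channel]

# A phone contact gets a phone survey, never an email or postal one.
assert pick_survey_channel("phone") == "immediate post-call automated phone survey"
```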

Surveys on internal IVRs are gaining in popularity, although not as quickly as Internet/email surveys. Each of these methodologies has its own drawbacks. A standard IVR is not designed to capture all of the elements necessary to do proper caller satisfaction research. Many companies have tried to upgrade their IVR systems to capture customer data, often having to compromise program necessities due to the limitations, therefore restricting their ability to produce actionable intelligence. Using an IVR as a collection tool, one must contend with internal resource limitations (which are always significant), design and analysis issues, trunk capacity limitations, reporting production, and the inability to capture and transcribe customer comments or to provide service recovery alerts.

CATs, while using IVR-type technology, is more of a complete service solution, and the added value secures faster ROIs for the center. The system is more flexible and scalable, and it provides greater depth in possible reporting options than an IVR. It allows callers to be randomly selected to take a survey after an interaction with a contact center agent has concluded. Agents are unaware that the caller has been invited to take the survey, so there is no difference in the quality of service provided due to knowledge of a pending evaluation.

This is a method that many contact centers use as part of their incentive pay programs because it provides the most accurate results. The system also provides service recovery opportunities by sending email service recovery alerts to key personnel in the event a customer rates an interaction poorly. The reporting of performance by the CATs system is also extremely flexible, containing in-depth analysis that can be delivered electronically or in hard copy and posted on an intranet or the Internet.

Quality Control is a Critical Component of Every Program

The greatest weakness of a Voice of the Customer program that is designed to hold agents accountable for their job performance is the failure to have a back-end quality control (QC) process that leverages the customer comments. Every CATs measurement program includes such a back-end quality control process. Without a QC process in place, the potential for error is amplified, meaning the credibility of the measurement tool is decreased and your liability is increased. Many caller feedback solutions that capture the Voice of the Customer only capture the data and present the numeric summary results. If you are using such a strategy to capture evaluations from the caller and then holding agents accountable for the results, consider the difficulty of defending such data.

Surveying is a science. For the VOC program results to be scientifically sound and credible down to the agent level, the QC process must take place and each survey must be consistently held up against a list of criteria. In the QC process, all of the customers' verbal comments must be reviewed to ensure the scoring portion and agent accountability are defensible. All scientifically sound research programs contain a data cleansing process. You must not forgo this step in your eagerness to quantify FCR and service delivery.

An effective QC process on agent-level feedback holds surveys up to the “is this fair?” test. These examples are taken from the thousands of comments Customer Relationship Metrics collects each day for our many clients.

“The Specialist that helped me with my problem was wonderful, however, the generalist, Bonnie Harris, that helped me in the very beginning was not. She got very aggravated with me when I had to be transferred to a specialist.”

Without the verbal explanation of the score and without a QC process, this survey would be assigned to the Specialist (as the last one to handle the interaction). By reviewing the comment and the survey scores in the QC process, it is apparent that the negative scores were actually meant as an evaluation of the generalist, Bonnie Harris. Based on the customer's explanation, this survey and score must be removed from the Specialist and attached to the rightful owner: Bonnie Harris. Anything less than this type of "is it fair" test creates "noise," and hence liability, associated with the program and its results.

Terminating an employee based on performance, providing or withholding incentive pay, or giving a raise or promotion from unclean (non-QC’d) data is a problem waiting to happen. Team members need to be focused on the service ratings of THEIR customers and not presented with an opening to spend time discounting the survey results or building a case that the measurement program is not fair.

Another example:

“The rep who assisted me was not very helpful. She had a very bad attitude and she disconnected me, which meant that I had to call back and have someone to connect me to this survey line. Someone needs to correct her on her bad attitude.”

This survey would be assigned incorrectly to the representative who followed the procedure to connect the customer to the survey. Without the QC process, the wrong employee would be held responsible for the scores. Quality control is paramount to the believability of the results and to providing sound information to the management team.

Another example:

“I don’t know why you are asking me to make a suggestion for the rep. She was wonderful. I pressed ONE to be the best.”

Despite very clear survey instructions, this customer did not realize that ONE was the lowest score and would certainly be confused when the survey asked how the representative could improve. A clear explanation of what the customer intended to rate allows the data to be corrected.

One final example:

“As far as I’m concerned, y’all handled my problem by getting XYZ company on the phone with us. Lets hope they take care of it like they said they would. It wasn’t your fault and you helped me. Thanks so much.”

On the CAT survey, the caller responded that the problem was not resolved, and technically that is true. However, the center assisted the customer with an issue that was beyond the company's control. The gray area of this situation should not affect the resolution classification of this call. The customer's survey comment, when asked to describe the unresolved problem, indicates that the score for the center should be changed from unresolved to resolved. If the customer finds that the problem was not corrected as promised by the other company, she will contact that company.
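Taken together, the four examples amount to a small set of correction rules applied during QC review. The sketch below illustrates those rules; the record fields, helper names, and five-point scale are hypothetical assumptions, not Metrics' actual data model:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Survey:
    agent: str              # agent the survey was initially credited to
    scores: dict[str, int]  # question -> numeric rating (assumed 1 = low, 5 = high)
    resolved: bool          # caller's first-contact-resolution answer
    comment: str            # transcribed verbal comment reviewed during QC

def reassign(survey: Survey, rightful_agent: str) -> Survey:
    """Examples 1 and 2: the comment shows the scores were aimed at a
    different employee than the one credited with the call."""
    return replace(survey, agent=rightful_agent)

def invert_scale(survey: Survey, scale_max: int = 5) -> Survey:
    """Example 3: the caller read the scale backwards ("I pressed ONE to
    be the best"), so each rating is mirrored."""
    return replace(survey, scores={q: scale_max + 1 - r for q, r in survey.scores.items()})

def mark_resolved(survey: Survey) -> Survey:
    """Example 4: the comment shows the center resolved its part of the
    issue, so the resolution flag is corrected."""
    return replace(survey, resolved=True)

# e.g., the first example: move the low scores off the Specialist.
s = Survey("Specialist", {"courtesy": 1}, resolved=True, comment="...Bonnie Harris...was not...")
s = reassign(s, "Bonnie Harris")
```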

How often does Metrics see (and correct) such errors? Less than 5% of the time sounds low, doesn't it? But the average contact center collects 600 surveys per month, so a 5% error rate means roughly 30 misattributed surveys, affecting the payout or performance review of up to 30 agents every month. Think about how many times something has to happen in your center before it creates a problem for the management team. How long would it take you to fix that many problems? How many of your agents would take advantage of this liability issue, leading to more questions about your management techniques and performance metrics?

The back-end quality control process for your agent-level feedback program will provide increased customer and employee satisfaction. In any case, surveys that do not pass the "is it fair" test affect every employee in your company, and rightfully so. The success or failure of your agent-level Voice of the Customer program is contingent on quality, and the program should never create dissatisfaction for customers or employees.

So when selecting a Voice of the Customer solution, be aware there are many low-cost options that do not have the QC process available. But as you can see, cheaper can be very expensive.

Evaluating quality is not as simple as one, two, three. It is a thought-provoking and passionate conversation for any contact center professional. To ensure that you acquire a competitive advantage, feed a Balanced Performance ScoreCard, and have effective input values for ROI models, the key is to be consistent, properly analyze the feedback, and turn it into actionable results in a continual improvement process.

Overview of CATs Relative to This Research

The ideal situation for a contact center manager is to have the customers’ perception of service quantitatively linked to the internal metrics currently used to manage the contact center. By having all of this data in one place, the manager is able to make better decisions based on both the Voice of the Customer and internal processes. The CATs program data can be linked to specific internal metrics, enabling a manager to understand how a change in one of the metrics will ultimately affect First Contact Resolution and customer satisfaction with the contact.

CATs works by greeting callers at the beginning of the call with a request to evaluate the interaction by providing feedback at the end of the call. This is done without the agent knowing of the impending evaluation. When the caller is done speaking with the agent, they are connected to a CAT survey. Because the request for feedback is immediate and non-intrusive, response rates are very high, and confidence levels of 95 percent and higher are easily achievable.
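As a rough illustration of the statistics behind that claim, the margin of error for a satisfaction proportion from a simple random sample follows the standard formula below. The code is our own sketch, not part of the CATs system; the 600-survey volume is the monthly average cited later in this paper:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of the confidence interval for a proportion.

    n -- number of completed surveys
    p -- observed proportion (0.5 is the conservative worst case)
    z -- z-score for the confidence level (1.96 corresponds to 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# At 600 completed surveys a month, center-level results are accurate
# to within about +/-4 points at 95% confidence.
print(f"{margin_of_error(600):.1%}")  # -> 4.0%
```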

This type of feedback system allows you to collect both quantitative data and qualitative data in the customers' own words, allows for branching of questions, sends alerts for less-than-acceptable service, and connects the results to a specific agent. By implementing such a system, the true voice of the customer can be captured. The flexibility of the ongoing measurement approach provides the platform to gather any additional customer intelligence. By implementing a real-time survey program that can be fully hosted, requiring no hardware or software, the contact center is leveraged to the entire organization.

A strong emphasis can be placed on the Voice of the Customer program because CATs is a valid measure of the customer experience. CATs focuses on the real-time capture of caller perceptions on the level of service they received from the contact center. With a stringent quality control process in place, CATs can assure agent-level results are reliable and valid. For this paper, we focused on the results of caller satisfaction from the CATs solution and an internal IVR data collection method to identify the impact of agent-level feedback on First Contact Resolution and the ROI of both programs.

Introduction to the Research Project

For both contact centers, Metrics performed a thorough and comprehensive study to design the survey scripts to effectively quantify callers’ perception of the service. While the two centers had different methods to capture the feedback, there were several common questions for comparative analysis purposes.

The two centers were selected because they were easily comparable to one another. Each center averaged about 30,000 calls per month. Each center had roughly the same number of agents answering calls. Since Call Center B chose to implement the survey via its own IVR system while Call Center A implemented CATs, Call Center B could not drill down to the agent level with its results. In short, this situation created an excellent opportunity to conduct a scientifically sound research project.

Research Methodology

The survey instrument was approved by the users of the information in each center, consisting of representatives from the monitoring team and contact center supervisors. The survey was not changed at any point during the research process, in order to provide consistent and valid data throughout the study.

Following the first measurement period, each center outlined the changes in the work process of the agents before and after implementing the caller feedback programs. Metrics then reviewed and revised the data elements required to quantify the performance and financial impact that the centers realized. Our primary concern was to be certain that we understood the process that was in place and that we were accurately portraying the contact center operation before and after implementation of the caller feedback program.

While Metrics clearly expected to find positive performance improvement, there were no guarantees. Whatever conclusions were made would be driven and supported by the customer evaluations gathered from the centers during the study. At a high level, Call Center B's caller satisfaction reports from the IVR survey allowed center managers to see the impact of collecting real-time customer feedback for the center as a whole. But for Call Center A, the availability of CATs reports focused the attention of every agent on the importance of each call, since any call might be selected for a CATs evaluation. The culture absorbed the fact that each contact is a customer relationship that needs to be managed effectively. The introduction of CATs, if nothing else, focused the agents on the need for call quality on each and every call. Since Call Center B did not collect agent-level data, it is not known whether its results would have been as striking. The anonymity of the agents in Call Center B creates a "not my problem, it was only a few calls today that I didn't handle well" attitude with respect to overall center performance and caller satisfaction.

Research Results

We examined the caller evaluations and measured First Contact Resolution rates over a continuous six-month period. Call Center A began the study with a First Contact Resolution rate of 36.4%; Call Center B began with a rate of 49.3%. This performance baseline was established as part of the reporting process after the first month's data. As the study progressed and agent-level feedback was collected by Call Center A, the results for this metric were dramatic.

Call Center A increased its First Contact Resolution rate by 10.4 percentage points, while Call Center B actually experienced a 2.9-point decline in its rate. Further analysis by Metrics highlighted for each center the financial impact of the numbers and quantified the subsequent effect on caller satisfaction and customer loyalty.

Quantifying what these increases and declines in First Contact Resolution rates mean in dollars can help contact center managers better justify the ROI of implementing an agent-level Voice of the Customer program. Based on the 30,000-calls-per-month average and a conservative $5.00 cost per call, if Call Center A maintains this improvement it will handle 3,120 fewer repeat calls per month, for annual savings of $187,200. On the other hand, Call Center B will have to handle an additional 870 calls per month, for an additional expense of $52,200 per year.
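The arithmetic behind these figures is straightforward. The short sketch below reproduces it from the call volumes and the $5.00 cost per call stated above; the function name and structure are ours:

```python
CALLS_PER_MONTH = 30_000
COST_PER_CALL = 5.00  # conservative, normalized cost per call

def annual_impact(fcr_change_points: float) -> tuple[float, float]:
    """Return (repeat calls avoided per month, annual dollar impact).

    Positive fcr_change_points means more first-contact resolutions and
    thus fewer repeat calls; negative means more repeat calls."""
    calls_avoided = CALLS_PER_MONTH * fcr_change_points / 100
    return calls_avoided, calls_avoided * COST_PER_CALL * 12

for center, change in [("Call Center A", 10.4), ("Call Center B", -2.9)]:
    calls, dollars = annual_impact(change)
    print(f"{center}: {calls:+,.0f} repeat calls/month, {dollars:+,.0f} USD/year")
# Call Center A: +3,120 repeat calls/month, +187,200 USD/year (savings)
# Call Center B: -870 repeat calls/month, -52,200 USD/year (added cost)
```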

These results show a substantial net improvement in First Contact Resolution rates for Call Center A, while Call Center B showed a net decline. The main factor was that Call Center A was able to identify the key drivers of customer satisfaction for the CSRs on each team from the real-time agent-level customer feedback it received through CATs. With the key-driver analysis, it had actionable results down to the agent level, which were used to implement targeted training and coaching for the individuals who needed it most, when they needed it most. Additionally, Call Center A was able to make changes in its call routing that resulted in callers being handled by the best-suited agent. Without agent-level feedback, Call Center B was not able to provide focused training and coaching and therefore did not receive the benefits experienced by Call Center A.

So, as you can see, Call Center A not only experienced reduced operating expenses from the decline in repeat calls but also proved a higher ROI for training and coaching. Call Center B is unable to realize the type of return on training that Call Center A did. Call Center B also struggled with extracting the data and delivering it to Metrics in time for its customer feedback to be processed promptly. Thus, connecting real-time caller feedback directly to the agent providing the service has far-reaching benefits. Here we focused only on direct costs; the impact on loyalty and increased revenues is significantly higher. It is all about having the right information at the right time and knowing the right thing to do with it.

Conclusion

Our research into the use of CATs showed that agent-level customer feedback was able to:

  • increase First Contact Resolution;
  • reduce operating expenses;
  • reduce headcount;
  • increase productivity;
  • increase caller satisfaction;
  • lower cost per call;
  • increase the ROI on training and coaching efforts;
  • generate a high program ROI, realizing a 1-month payback.


Dr. Jodie Monger is the President of Customer Relationship Metrics and a pioneer in customer satisfaction measurement for the contact center industry. Prior to creating Metrics, Dr. Jodie was the founding Associate Director of Purdue University's Center for Customer-Driven Quality. Since 1993, Dr. Jodie has worked with numerous organizations, using her expertise to help them quantify the Voice of their Customer.

Dr. Cherie Keen is the Vice President of Research and Client Services for Customer Relationship Metrics. Prior to joining Metrics she was the Director of Research at SOCAP International (previously the Society of Consumer Affairs Professionals in Business), an international membership organization for customer care executives.

Customer Relationship Metrics delivers beyond the contractual scope of assignments through its diverse experience in customer relationship management. Our extensive research skills, knowledge of customer satisfaction drivers, and reliability to deliver are why clients choose Metrics. More information can be found on our website, www.metrics.net, or by calling 1-877-550-0223.



Copyright 2004 - Customer Relationship Metrics. All rights reserved.
