Customer service is, at its core, about customers being satisfied with the goods or services offered. But how do you measure whether a customer service team is actually doing a good job?
Metrics like Reply Time or First Contact Resolution offer little insight into how well customers are treated, and satisfaction surveys rarely give the full picture either.
So how do you make sure a customer support team is providing the service you are paying for?
One way to ensure support quality and prevent inappropriate replies is to implement Quality Assurance (also known as QA, quality assessment, or interaction/conversation reviews). At SupportYourApp we call it a "Ticket Review," but we'll use QA throughout this article purely for convenience.
What Exactly Is QA, When It Comes to Customer Service?
Quality assurance aims to deliver the highest standards of customer service and to maintain those standards through continuous evaluation of agents and resolution processes. It also helps us identify breakdowns and improve customer service.
With over 9 years of providing customer support services to more than 200 companies globally, we've identified the practices that work best with our clients. We hope our way of conducting QA offers useful insights both to those who have never done it before and to those looking for fresh, workable ideas.
Evaluation Criteria
At the core of our QA strategy, we score our agents’ performance to give us a measurable metric.
We have identified the following four categories:
- Grammar
- Emotional intelligence (Empathy)
- Helpfulness
- Knowing the customer
The categories each consist of several criteria, such as the usefulness of the reply or how 'robotic' the agent sounded during the conversation with the client. Every criterion is scored from 1 to 5, with 5 being the maximum, and the points are averaged into a total performance score for each ticket.
Over time, however, this method proved rather biased: evaluations were done by different people, and the scores were highly subjective. Another problem was the large amount of time the process took. The results didn't tell us much about the quality of the customer assistance we were providing, nor could they set a benchmark for raising and maintaining the level of our services. So we overhauled our QA process: instead of giving scores for different criteria, QA managers now pick from a list of possible 'cases'.
Just as with scores, our new 'cases' system revolves around evaluating agents against groups of criteria such as grammar, spelling, product knowledge, and conversational skills. This evaluation can be adapted to any communication channel, including email, phone, and live chat. Where the old categories were generalized, the new cases let us evaluate agents much more specifically, which helps identify areas for improving an agent's overall performance. Typical 'cases' look like this:
- Using a lot of "tech" terms to communicate a solution when the customer is not tech-savvy.
- An extra space or an extra line break in the message structure.
- The message bounces from topic to topic, with no clear lead from start to finish.
- The agent asked for additional details when there was already enough information to solve the case.
- Violation of the on-time reply policy.
- The provided solution led to bad consequences, or there was a direct violation of the product policy.
These cases are grouped into Minor Issues, Major Issues, Deadly Sins, and Outstanding Performance. We believe positive reinforcement is just as effective in ensuring great service as catching mistakes (more on team motivation later). However, if there are at least two minor issues or one major issue, the bonus points won't be applied. Typical 'outstanding performance' cases look like this:
- The agent helped with an issue that wasn’t acknowledged by the customer / showed initiative
- The agent was able to upsell to the customer
The benefit of our criteria list is that it can be applied to practically any company. SupportYourApp provides customer support for many different companies and industries, so it's important that our QA process suits any of them without major changes. If you are creating a QA strategy for your own company, keep your company's specific features in mind and customize your 'cases' accordingly. For example, specific 'deadly sins' for a particular project could look like this:
- The agent sent troubleshooting steps for the wrong OS.
- The screenshot of the issue sent by the customer wasn't saved and was automatically deleted.
- The agent copied and pasted portions of instructions from the internal KB without changing the formatting, so the email ended up with different fonts.
- The agent took screenshots and shared them through public screenshot/image-hosting services.
Score Calculation
A problem with evaluating quality using numerical scores is that it's not objective: different people have different views on how serious certain mistakes are. With our new 'cases' system, the weight of each mistake is defined in advance by the QA engineer.
Each evaluation starts with a base score of 100%, which is decreased by issues or increased by bonus points. One minor issue "weighs" 15% of the whole score, while two minor issues take off 25%, and so on.
So what should be considered a bad score? By adjusting criteria weights, you can make final scores appear higher or lower. This matters for team motivation: a low result may discourage agents and eventually hurt overall performance.
For example, a score of 40% is more likely to upset or demotivate an agent than a score of 70%, even if both reflect the same level of performance. By adjusting criteria weights and setting the bad-score threshold higher, you can keep your team motivated while still being able to identify critical performance issues.
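The mechanics above can be sketched as a small scoring function. The article only specifies the 100% base, the 15%/25% minor-issue deductions, and the bonus-withholding rule; the major and "deadly sin" weights used here are assumptions for illustration.

```javascript
// Minimal sketch of the 'cases' scoring described above.
// Weights for major issues and deadly sins are ASSUMED, not SupportYourApp's actual values.
function ticketScore(minor, major, deadly, bonus) {
  let score = 100; // base score
  // Escalating minor-issue penalty: 15% for the first, 10% for each additional
  // (so two minor issues take off 25%, matching the article).
  if (minor > 0) score -= 15 + (minor - 1) * 10;
  score -= major * 30;  // assumed weight per major issue
  score -= deadly * 60; // assumed weight per 'deadly sin'
  // Bonus points are withheld with 2+ minor issues or any major issue.
  if (minor < 2 && major === 0 && deadly === 0) score += bonus;
  return Math.max(0, Math.min(score, 100));
}

ticketScore(0, 0, 0, 0); // -> 100
ticketScore(1, 0, 0, 0); // -> 85
ticketScore(2, 0, 0, 5); // -> 75 (two minors: bonus withheld)
```

Tuning the hard-coded weights is exactly the "adjusting criteria weight" lever discussed above.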
How Many Tickets to Review
Reviewing 100% of tickets is not always possible; our in-house QA review normally covers about 10% of all tickets.
If you sell a premium product or service, you might want to review more tickets to ensure top-notch customer service quality. You could also review more tickets from a particular customer if they are among your key accounts. Apart from randomly picked tickets, it is also important to include critical cases so the same mistakes are not repeated in the future.
Frequency of Reviews
Every company decides for itself how often to hold QA sessions. At SupportYourApp we (most often) carry them out at the end of each month. This frequency allows us to keep our service level high to meet our clients’ expectations.
What’s Next?
After gathering all the data it is time to analyze it. Now with the resulting scores, we can see each agent’s performance and overall team score. To get the most out of the QA data we group the results as follows.
- Agents. This includes building weekly and monthly trends for each agent's performance so that we can follow each individual's development and notice any improvements or problems.
- Tiering. If you have different levels in the support team, it makes sense to monitor and compare their performance. The usual division is level 1 (or tier 1) handling general tickets, while level 2 digs deeper into the technical side of things and is able to thoroughly troubleshoot or even fix problems for the client.
- Products. If you have two or more products, it is essential to keep track of how well each of them is being supported. When launching a new product, this helps spot support problems in the early stages.
- Languages. By evaluating how well each language is being supported, you can decide whether to invest in training your existing agents or to hire native speakers.
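All four groupings boil down to averaging review scores over a chosen field. Here's a minimal sketch; the field names (`agent`, `language`, `score`) are illustrative, not SupportYourApp's actual schema.

```javascript
// Average review scores grouped by an arbitrary field (agent, tier, product, language...).
function averageByKey(reviews, key) {
  const totals = {};
  for (const r of reviews) {
    const k = r[key];
    if (!totals[k]) totals[k] = { sum: 0, n: 0 };
    totals[k].sum += r.score;
    totals[k].n += 1;
  }
  const out = {};
  for (const k in totals) out[k] = totals[k].sum / totals[k].n;
  return out;
}

// Hypothetical review rows:
const reviews = [
  { agent: "Alice", language: "EN", score: 90 },
  { agent: "Alice", language: "EN", score: 80 },
  { agent: "Bob",   language: "DE", score: 70 },
];
averageByKey(reviews, "agent");    // -> { Alice: 85, Bob: 70 }
averageByKey(reviews, "language"); // -> { EN: 85, DE: 70 }
```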
We’ve set up an alert system that notifies the supervisor via email whenever any ticket gets a score below the 85% threshold. This lets us prevent further issues before we even get to the weekly agent stats.
When an agent’s weekly score falls under 80%, we arrange a meeting with them to discuss their performance. During such meetings it is important to learn why the agent is underperforming, whether there are any workflow problems, and how to find a workable solution. There’s also a system of penalties and rewards to stimulate agents’ performance; penalties are used only in extreme cases and always as a last resort.
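The two thresholds above are easy to automate. A minimal sketch, assuming simple score records (the data shapes are hypothetical):

```javascript
// Thresholds from the process described above.
const TICKET_ALERT_THRESHOLD = 85;  // per-ticket email alert
const WEEKLY_MEETING_THRESHOLD = 80; // weekly average triggers a 1:1 meeting

// Tickets whose score should trigger a supervisor email.
function ticketsNeedingAlert(tickets) {
  return tickets.filter(t => t.score < TICKET_ALERT_THRESHOLD);
}

// Agents whose weekly average warrants a performance meeting.
function agentsNeedingMeeting(weeklyAverages) {
  return Object.keys(weeklyAverages)
    .filter(agent => weeklyAverages[agent] < WEEKLY_MEETING_THRESHOLD);
}

ticketsNeedingAlert([{ id: 1, score: 90 }, { id: 2, score: 80 }]); // -> [{ id: 2, score: 80 }]
agentsNeedingMeeting({ Alice: 85, Bob: 75 });                      // -> ["Bob"]
```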
Hiring Someone to Do It
Customer support QA is just one area of responsibility, and it is easy for it to slip down the priority list as the company (or at least its customer support) expands. So, naturally, you might start looking to hire or train someone to become a QA manager. Finding the right person is just as important as the evaluation process itself. The most important traits to look for are:
- Impeccable language skills
- Great communication and empathy
- Team lead skills
- Logical thinking
It’s a good practice to have a QA manager work as a customer support representative for some time. Depending on the product complexity and the number of unique issues, 2 to 8 weeks should be enough for most cases.
Customer Support QA Software & Tools
Very often, the QA process requires additional tools and automation. At SupportYourApp we use just two tools to do it all: Google Sheets and Google Apps Script.
With Google Apps Script, we’ve built a ticket review submission form, where the QA manager enters all ticket info (ticket link, agent name, channel) and picks relevant ‘cases’.
After data is collected, it is stored and organized using Google Sheets (don’t underestimate Google Sheets — some businesses build their entire processes around this tool with practically infinite use cases). The results and data can be processed to create reports, email alert notifications, or used for presentations later.
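A sketch of the submission side of such a pipeline, in plain JavaScript (Apps Script's language). The field names and column order here are assumptions, not our actual form; in Apps Script the resulting row would be written with something like `SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Reviews").appendRow(row)`.

```javascript
// Turn a submitted ticket review into a spreadsheet row.
// Field names and column order are hypothetical.
function buildReviewRow(review) {
  return [
    review.ticketLink,
    review.agent,
    review.channel,
    review.cases.join(", "), // the 'cases' the QA manager picked
    review.score,
  ];
}

const row = buildReviewRow({
  ticketLink: "https://example.com/ticket/123",
  agent: "Alice",
  channel: "email",
  cases: ["extra line break"],
  score: 85,
});
// row -> ["https://example.com/ticket/123", "Alice", "email", "extra line break", 85]
```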
This combination is a great option if your resources are limited and at the same time, you want to be able to control the entire process. However, these days there are plenty of ‘ready-to-go’ solutions that could be tailored to the needs of almost any business. Here are some of the best tools out there:
- Playvox. It is a great tool to evaluate your customer support team’s performance that can be integrated with various CRM systems or used as a standalone solution. It also allows you to create your own QA cards and customize them to your own needs. More than that — with Playvox your agents can become a part of the evaluation process and receive real-time feedback.
- Klaus. It allows you to filter the tickets for evaluation, compare the results over time, and choose categories for ratings, such as empathy, technical knowledge, procedure, grammar, and others. You will be able to see the full conversation history for each ticket, including hidden comments. For those concerned with data safety, Klaus is fully GDPR-compliant. It also provides seamless integration with tools such as Zendesk, HelpScout, Aircall, Freshdesk, and others.
- MaestroQA. Just like Playvox, MaestroQA is compatible with plenty of CRM tools. It is also great for monitoring agent and team performance, giving feedback, and automating these processes. Not to mention that this tool makes your support quality assurance evaluation process omnichannel.
Final Thoughts
Quality assurance is a necessary step on the road to excellent customer service. Before you start implementing these practices, think about what excellence means for your company, and make the most of the QA process to get maximum results.
Bonus: Customer Service QA Checklist
- Define evaluation criteria
- Grammar
- Spelling
- Conversational skills
- Emotional intelligence
- Helpfulness
- Personalization
- Product knowledge
- Channel-specific
- Workflow (correct handling of CRM/software/tools)
- Define score scale
- Scale (0-100%, F-A+, 1-5)
- Base score
- Criteria weight
- Bad score threshold
- Evaluate
- Build process (spreadsheets, scripts, or third-party software)
- Define how many tickets to review
- Prioritization (for specific customers or support channels)
- Apply results
- Create weekly/monthly reports
- Group reports as applicable (by agent, language, product, etc.)
- Set up a feedback and improvement process
- Set up a bonus or penalty system
- Set low score alerts
- Manage
- Add recurring QA checks as events in a calendar
- Train supervisor/team lead to do it
- Hire QA manager

Nika is an independent digital marketer with a real passion for the world of customer service who has been working in the field for the last 10 years. She was born and raised in Kyiv, Ukraine, where she still lives with her adorable Yorkshire Terrier. Nika loves doing deep research into modern customer behavior, which makes her a real pro at understanding customers' preferences and interests. She loves travel, technical diving, and sports.