Joe Andrews & Tali Bartal 29 min

How Auto QA Elevates your Post-Sales CX


Quality Assurance has long been the Achilles' heel of support operations, but it doesn’t have to be. This session will challenge your assumptions about QA, demonstrating how automation can elevate your entire support experience. You’ll see why relying on manual checks is no longer enough in today’s fast-paced environment.



0:00

All right, I'm Joe Andrews again and joined by my colleague, Tali.

0:07

We're going to talk about AutoQA, and by that we mean quality assurance and quality monitoring.

0:16

Just a show of hands: who here is doing AutoQA today, or any QA, manual QA?

0:24

Anyway, okay.

0:28

So what we want to do today is talk about the opportunity for AutoQA: the use cases, what it means for the business, the impact, and some of the cost considerations and trade-offs.

0:54

I'm going to do just a little bit of tee-up around that, but we want to spend most of the time with Tali actually showing you the product, SupportLogic Elevate, our own AutoQA solution.

1:07

So with that, I'll start with the thinking that a lot of companies are faced with challenges around agent retention.

1:19

It's harder than ever today. You're all aware of support organizations where firefighting leads to stress; things like escalations and upset customers are just part of support engineers' and support agents' every day.

1:40

And so the opportunity to coach and to help enable your agents and your support teams is tremendous.

1:50

But the reality today is that coaching takes a lot of time, and it often gets less time than we want.

2:01

And in the majority of cases it happens when it's not a good time, so the feedback isn't absorbed.

2:09

Reviews take a lot of time.

2:12

When you're in a manual situation, you're only looking at a fraction of the data.

2:16

We'll talk about that in a second.

2:18

And then there's no real consistent coaching practice.

2:22

So typically, quality assurance or quality monitoring teams are set up to accomplish a couple of things.

2:31

One is around coaching, and another is around compliance, like ensuring that certain process steps are done with your customers.

2:42

So just curious: for the ones who are looking at AutoQA solutions, are you most interested in the coaching side of things?

2:51

Or is it more compliance, or something else?

2:54

Coaching.

2:55

Yep.

2:56

So it's all focused on coaching.

2:57

What about over there?

3:00

There's interest or looking at AutoQA?

3:03

A little bit.

3:04

A little bit of both.

3:05

Okay.

3:06

So agent retention is harder.

3:09

Coaching can have a tremendous impact on agent retention.

3:13

But this is the reality, right, for those of you who are using manual QA solutions.

3:20

Typically it's a sampling process.

3:24

It's similar to surveying customers.

3:26

You're picking one to two percent, maybe, of cases.

3:30

And typically it's either random or it's manual selection, which is even worse because there's more bias introduced.

3:43

And often it's delayed.

3:45

It's after the fact.

3:46

Maybe it's once a week.

3:48

Maybe it's at the end of a month.

3:50

And it's subjective.

3:52

So it basically barely scratches the surface of where organizations could be, and should be, in terms of their agent coaching practice.
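
To make the cost of sampling concrete, here is a back-of-the-envelope sketch (the volumes are hypothetical, not from the session): at a 1-2 percent review rate, even a recurring problem behavior can easily go completely unseen in a given month.

```python
# Back-of-the-envelope sketch with hypothetical volumes: how likely is a
# small random QA sample to miss a problem behavior entirely?

def miss_probability(flawed_cases: int, sample_rate: float) -> float:
    """P(a uniform random sample contains none of the flawed cases),
    treating each case as sampled independently with prob. sample_rate."""
    return (1.0 - sample_rate) ** flawed_cases

# Suppose 20 of 10,000 monthly cases exhibit the behavior and 1.5% are reviewed:
print(f"{miss_probability(flawed_cases=20, sample_rate=0.015):.0%}")  # ~74%
```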

4:05

So this is what SupportLogic set out to solve with our AutoQA solution using AI.

4:13

We ingest 100 percent of the cases: all of the conversations and interactions that happen on cases.

4:24

It's close to real time.

4:25

So it's timely, it's contextual, and it's consistent and objective, because the machine learning is actually powering the scoring.

4:36

So you can adjust the rubric, the algorithm, the factors that are important to coach on, but the bottom line is that everyone is viewed through an equal lens.

4:51

And this will help to transition your QA organization from laborious scoring to more of a value-add coaching organization.

5:03

And that's where I think everyone wants to be.

5:06

I'm seeing some head nods here.

5:11

And these are the primary use cases that we're going to talk about.

5:15

Manual and automated QA: there are uses for manual QA, but at a baseline you want to start with automated.

5:22

You want to ingest everything and use that as your starting point.

5:27

We want to talk about coaching feedback loops, how you build this into your business process, and ultimately how you do it more efficiently.

5:39

That's the big challenge, because in large organizations that have a QA process, let's say for a thousand agents, you typically have 20 to 25 QA people on a team doing that.

5:56

And then how do you ultimately measure things like customer effort score, and how do you ensure compliance, zero-tolerance policies around behaviors that are not meeting the mark?

6:07

So these are all the things that we're going to talk about in terms of the solution, just really quickly.

6:12

Oh, sorry.

6:13

>> You might want to take questions.

6:14

>> Yeah, absolutely.

6:15

>> [Audience question, partially inaudible] ... it makes a ton of sense to do this with sentiment analysis of someone talking, and then an RCA around the root cause.

6:31

>> Do you want to address that?

6:38

>> Yeah.

6:39

>> Yeah, I think that, oh my, sorry.

6:56

Yes, there are some solutions around that as well.

7:00

We have some configurations with which you can address particular QA cases.

7:04

So if you need particular keywords, you will see when I demo those assignment queues: when we decide what cases we're going to QA, both for the automated and the manual,

7:16

you can restrict to particular keywords, for example, or particular text phrases that you care more about.

7:25

So you definitely have control over it.

7:28

And the good thing is that, as you will see, the queues you're creating are not singular.

7:36

So you can create as many as you want.

7:37

So you can create one that is more technical, one that is more about coverage, like wide coverage, et cetera.

7:44

You will see my demo and then you can ask more technical questions.

7:47

>> [Inaudible audience question.]

7:51

>> Yeah, so a similar question is: is the coaching feedback around the communication skills, or the technical feedback?

8:05

The answer is both and we'll get into that.

8:08

>> Yeah, I'm sorry.

8:11

It's very similar.

8:14

>> So, as we said, [partially inaudible] if you are in that situation, you can try what you need.

8:47

You will probably need to update it based on your timeline, but you do have to make sure that it's flexible enough for you.

9:05

>> These are great questions, and we're going to get into the product so you can actually see it.

9:10

I just want to touch on a couple more points.

9:12

We work across all major systems of record on the left.

9:16

Multilingual and multimodal: we're ingesting different data streams from different channels.

9:23

And you can see the general outputs: escalations, behaviors, brand trust and safety, and performance; performance is where you get into the technical aspects.

9:34

We will show more details of that.

9:38

I'll really quickly wrap up a couple slides and then we'll get into the product.

9:42

You can see, comparing traditional or manual QA against auto: tickets reviewed, typically a low single-digit percentage versus 100 percent with AutoQA.

9:53

You can see what that means for the human resources, the team that's doing it: typically one analyst per 25 agents if you're doing manual, where it's one per 100 if you're doing auto.

10:06

You move from coaching feedback that's sporadic to near real time, and scalability from rigid and expensive to flexible; rigid because it's largely people resources.

10:20

I'm not going to go into this in depth.

10:22

We're happy to do an ROI analysis.

10:24

We actually have a pretty in-depth calculator for this, but again, assuming about a thousand agents, you have an annual cost saving from right-sizing the QA team.

10:37

There is an opportunity to shrink the number of full-time QA analysts, and then you have improvements in operating expenses (OPEX) that come from lower handle time, fewer repeat calls, etc., and improved quality.

10:56

We can run through this calculation for your situation if you're interested in that.
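
As a rough illustration of the staffing side of that math (not the actual calculator), here is a minimal sketch using the 1:25 versus 1:100 analyst-to-agent ratios from the slide; the per-analyst cost is an assumed placeholder, and the real model also covers handle time and repeat contacts.

```python
from math import ceil

# Minimal sketch of the staffing piece of the ROI math. The ratios come from
# the slide (1 analyst per 25 agents manual, 1 per 100 with AutoQA); the
# per-analyst cost is an assumed placeholder, not a SupportLogic figure.
ANALYST_COST = 80_000  # assumed fully loaded annual cost per QA analyst, USD

def staffing_savings(num_agents: int, manual_ratio: int = 25, auto_ratio: int = 100) -> dict:
    manual = ceil(num_agents / manual_ratio)   # analysts needed for manual QA
    auto = ceil(num_agents / auto_ratio)       # analysts needed with AutoQA
    return {"manual_analysts": manual,
            "auto_analysts": auto,
            "annual_savings": (manual - auto) * ANALYST_COST}

print(staffing_savings(1000))
# {'manual_analysts': 40, 'auto_analysts': 10, 'annual_savings': 2400000}
```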

11:02

Let's quickly get over to the demo, because I think showing is more powerful than talking about it.

11:07

I can try to hold it down.

11:10

Can you hear me?

11:12

Yeah.

11:13

I'm usually a loud person, so I don't want to be too loud.

11:15

Closer?

11:16

Good.

11:17

All right.

11:18

I'm going to show you some use cases that we do with our auto and manual QA.

11:25

I'm going to demo from, kind of, three personas' points of view.

11:30

The first one is the regular support manager or support leader, where all he needs or wants to know, at any second of his life, is: what is the quality of his support?

11:43

So it's the question that is always hanging there, but we don't always have an answer for it.

11:49

So now, with the tool, we can easily come from the core product, click on Elevate, and launch the Elevate solution.

12:06

And once we come into Elevate, you can immediately focus on the scorecards and pick AutoQA.

12:17

So AutoQA is where we do QA on top of all the cases, all the time.

12:24

This is something that is unbelievable.

12:28

It's full coverage.

12:29

No manual interaction is needed.

12:32

So, theoretically, any case that comes in, with any interactions within it, is covered. I will go to the case level just to showcase the coverage.

12:41

We do the full QA based on the scorecard that you decide to score against.

12:48

So you have flexibility over what behaviors we scan and what skills we score, and based on all this setup we run on every single case and do the automation.

13:03

So you, as a manager, come to this tool and can immediately see that in the last 30 days the QA score of your organization is whatever number it is.

13:14

The good news is also that you have other information on the screen, such as trends.

13:20

So if you have enough data and QA is running, you can look over time, weekly or monthly, and you can see several trends running and see how the QA is evolving.

13:34

You can also create different groups by interest.

13:37

I have three examples here that I can drill down into and actually perform the analysis of this QA (AutoQA, in this example) on those particular groups.

13:52

Groups can be groups of agents, or areas, or whatever you want to structure into a group.

14:00

Lower on the screen, just to get you familiar with the flow of the screen, you have those skills and behaviors.

14:07

As I said every scorecard will have a variation of those.

14:11

So in the skills, for example, here I have the opening, the efficiency, the empathy, etc.

14:19

And the behaviors will take us one level down.

14:24

Where you can see, for example, that for the opening we will have metrics like greeting, assistance, and introduction.

14:31

So if the agent does not do one of them, we will grade him the way that it's defined, for example negative.
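
To make the hierarchy concrete, here is a minimal sketch of how skills, behaviors, weights, and grades could fit together. The names (Opening, Greeting, Introduction) mirror the demo, but the data model itself is hypothetical, not Elevate's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical model of the skills -> behaviors hierarchy from the demo;
# not Elevate's actual schema. Grades: +1 positive, -1 negative, None unscored.

@dataclass
class Behavior:
    name: str
    weight: float = 1.0
    grade: int | None = None

@dataclass
class Skill:
    name: str
    behaviors: list[Behavior] = field(default_factory=list)

    def score(self) -> float | None:
        """Weighted average over the behaviors that were actually graded."""
        graded = [b for b in self.behaviors if b.grade is not None]
        if not graded:
            return None
        return sum(b.weight * b.grade for b in graded) / sum(b.weight for b in graded)

opening = Skill("Opening", [
    Behavior("Greeting", grade=+1),
    Behavior("Assistance", grade=+1),
    Behavior("Introduction", weight=2.0, grade=-1),  # agent never introduced themselves
])
print(opening.score())  # (1 + 1 - 2) / 4 = 0.0
```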

14:41

Below, you can see exactly the same breakdown for those behaviors, just on the group level.

14:49

So let's just drill down to show you how impactful it is.

14:54

If you go down to the team (this is the group that I picked), you will see the exact statistics on the team.

15:02

Again, the same trends, the breakdown of the agents within the team, and the metrics of the skills and behaviors.

15:10

And I can go all the way down to the individual agent.

15:14

So again, as a manager, if I want to start from the top and just get an overall understanding of what's going on in my organization, I can do it from the top, I can drill down to the group, and I can go to the individual agent.

15:27

From the individual agent, obviously, you can see all the statistics, but you can also drill down into an individual case if you identify some trend that is, for example, negative.

15:46

Let's say I picked a particular one of the agents here in the system, and I can see that on the greetings she has a red score all the time.

15:57

Things are not going well there.

15:59

Right from here, I can ask to see what is negative, and it will bring me to this screen, where it will filter for me the agent, the 30-day time frame, and the behavior I was on, and show me the exact cases where the behavior was caught.

16:21

So if I open one of the cases (again, I'm going through the route of that kind of investigation, where I want to build up and bring some value when I have a conversation with the agent), it will land me right in the case and show me my whole scorecard on the right side.

16:40

So, as we said, the scorecard is flexible; this is what we decided for the auto assignment, and this is what you see.

16:46

On the opening, you see there are three criteria, and one of them is negative, on the introduction.

16:52

If I look at the behavior here, I can see "Hello, [person name]"; that is our redaction, so we redact data (we have a redaction service if we want).

17:03

"Thank you for reaching out to me," etc.

17:05

So we already see that the agent did not introduce himself or herself.

17:10

This is something that we point out as a negative; this is what the system caught, and this is why the score is negative for the introduction.

17:21

The other parameters here are positive, and some of them are not scored, so as an auditor (if I'm the auditor) I can start my manual QA right on top of this automation and edit it further in order to fully complete the QA on the case.

17:41

I will show it in a second.

17:44

And eventually, once the full QA is complete, an email is sent to the agent to notify him that the manual QA was completed on his case.

17:54

The agent will get an email and will need to acknowledge the email.

17:59

We can define the time frame of the acknowledgment; it doesn't need to be right away, we can decide, say, within the next three or five days.

18:06

And once he acknowledges the QA that was done on his case, he can obviously review the results and raise something that we call a dispute.

18:17

A dispute is when something in the score he was given is not what he was expecting and he disagrees; he can respond and explain which item he disagrees with, and it will go back to the auditor. I will show you the flow of the dispute.

18:36

Just to point out, this case has all kinds of variations of data.

18:42

For example, we also have a voice file on the case.

18:45

So the QA is done not just on the comments but also on the chats that are linked to the case, as well as the voice.

18:55

On the voice, you can see it is highlighted in different colors; those are the signals.

19:01

Those are the elements that we are listening for and tagging as part of the QA, and those are also converted into the scorecard.

19:10

Plus, you can see, I just went through the case and I saw some foreign language on the case here.

19:17

So I was curious, and I checked the translation.

19:22

This case is already translated and the QA is done on the translated version.

19:27

We can see the original version if we flip it to the original.

19:31

So the text came in in its original language, we did the translation, and we applied the AutoQA on the translation.

19:39

So, just to show you how powerful it is: we will not just take the comments as they are.

19:44

If they need translation, we translate them.

19:46

If there is a voice file or a chat file, we will break it down and work on it as well.
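
That normalization flow (voice is transcribed, chat and comments are taken as text, anything non-English is translated, and only then scored) can be sketched as a tiny pipeline. Every helper below is a trivial stub standing in for real speech-to-text, language detection, and machine translation; none of this is Elevate's internal API.

```python
# Schematic of the flow described above; all helpers are trivial stubs.

def transcribe(audio: bytes) -> str:
    return "<speech-to-text output>"              # stub for real STT

def detect_language(text: str) -> str:
    return "en" if text.isascii() else "other"    # crude stand-in

def translate(text: str, target: str = "en") -> str:
    return f"<{target} translation of: {text}>"   # stub for real MT

def normalize(interaction: dict) -> str:
    """Reduce any channel (voice, chat, comment) to English text for QA."""
    text = (transcribe(interaction["audio"])
            if interaction["channel"] == "voice"
            else interaction["text"])
    if detect_language(text) != "en":
        text = translate(text)                    # QA runs on the translation
    return text

print(normalize({"channel": "chat", "text": "Merci, c'est résolu!"}))
```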

19:53

So now, just to show you: as an auditor, what else can I do?

19:59

As you already saw, the AutoQA is a time saver, right?

20:03

Everything is done for you.

20:04

You just see the results, and you can drill down to the points that interest you.

20:09

But there is always the question: if I want to manually QA things, and I have auditors doing that manual QA,

20:18

What should I do?

20:19

Like, what cases do I need to focus on?

20:21

This is what we have on our assignment board.

20:24

So we can create an assignment, which is a list of cases for the auditor, and let him work on the cases in the assignment.

20:34

Now, the create-assignments area is where we decide what the assignment list is going to be.

20:41

It can be a single list, where I literally cherry-pick the cases, or I define a filter for the cases and send it as an assignment; the auditor does the assignment, and the list is done.

20:54

I can also create a list that is recurring; this list will have a filter, like a query, behind it, will fill itself with cases based on the filtration that we provide, and will periodically let the auditor revisit the list over and over, and the cases will be there.

21:18

So, just to give some examples.

21:20

Somebody asked me here how we control what we QA.

21:24

So here you have all the flexibility to do some filtration.

21:29

Filtration can be based, for example, on the signals.

21:33

We can pull out only the cases that have particular signals or, for example, particular predictions.

21:44

We can filter cases that have a high or low customer effort score.

21:51

Another example.

21:53

We have different keywords.

21:56

We were asked about technical terms.

22:00

So we can put some technical terms here, and then we will target the cases that have those technical terms into this assignment queue, assign it to the appropriate auditor, and he will take care of those cases for the manual QA.

22:14

Obviously, you have trivial stuff like how many tickets are going to be in the queue, or what the due date of the queue is.

22:22

Those are the basic defaults.
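
A recurring assignment queue is, in effect, a saved query over cases that refills itself. Here is a minimal sketch of that idea; the field names (keywords, signals, max_cases) are illustrative stand-ins, not Elevate's actual configuration.

```python
from dataclasses import dataclass, field

# Sketch of a recurring assignment queue as a saved query over cases.
# Field names are illustrative, not Elevate's actual configuration.

@dataclass
class AssignmentQueue:
    name: str
    auditor: str
    keywords: set[str] = field(default_factory=set)   # e.g. technical terms
    signals: set[str] = field(default_factory=set)    # e.g. detected signals
    max_cases: int = 5                                # tickets per refill

    def refill(self, cases: list[dict]) -> list[dict]:
        """Re-run the saved filter so the queue fills itself with fresh matches."""
        def matches(case: dict) -> bool:
            text = case.get("text", "").lower()
            return (any(k in text for k in self.keywords)
                    or bool(self.signals & set(case.get("signals", ()))))
        return [c for c in cases if matches(c)][: self.max_cases]

queue = AssignmentQueue("Kernel issues", auditor="dana",
                        keywords={"kernel panic", "timeout"})
print(queue.refill([{"text": "Host hit a kernel panic overnight"}]))
```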

22:25

So all those assignment lists, assignment queues, will be in this list, and you can see them, see those assignments.

22:33

You can see how many cases are assigned to each one of them, and when you start working as an auditor, you come to the page, which will tell you: you now have five cases in this assignment list that you need to manually QA.

22:49

You just go through the manual QA: you start your QA, you pick the responder, and you just go in with your scorecard, and obviously every selection can have a comment, etc.

23:09

One more thing to mention on the assignment.

23:11

Since most organizations have more than one auditor, and a lot of the time one auditor has more experience while another is new, they are not always aligned in how they do the QA.

23:24

So we want to make them trained, aligned, and consistent across the board.

23:30

So we have a tool that we call calibrations.

23:34

This is something that we can create.

23:36

It's very similar to an assignment: we create a queue with several cases in it, and we assign it to several auditors.

23:45

It doesn't really do QA against the agent; instead, the report goes to the manager.

23:51

So I can see how each individual auditor performed the manual QA.

23:57

I get the report, and then we can review it as a group, align ourselves, and bring our knowledge to a similar baseline.
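
One simple way to turn a calibration exercise into the kind of report described here is to compare each auditor against the group consensus on the shared cases. The sketch below uses made-up auditor names and scores; the real report format is surely different.

```python
from statistics import mean

# Sketch of a calibration report: several auditors score the same cases, and
# each auditor's average deviation from the per-case consensus is reported.
# Auditor names and scores are made up for illustration.

scores = {                       # auditor -> score per shared case (0-100)
    "amira": [80, 70, 90],
    "boris": [85, 72, 88],
    "chen":  [60, 50, 70],       # consistently harsher -> needs aligning
}

consensus = [mean(case_scores) for case_scores in zip(*scores.values())]
for auditor, vals in scores.items():
    deviation = mean(v - c for v, c in zip(vals, consensus))
    print(f"{auditor}: {deviation:+.1f} points vs. group consensus")
```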

24:08

So, disputes: as we said, once the manual QA is done, the case is sent to the agent; the agent gets the case, he might be happy or not, and he will acknowledge it.

24:17

And then he might raise a dispute.

24:20

The disputes will be visible to me here as an auditor.

24:24

I will see my disputes, and then I can go in and see what the dispute was about.

24:30

For example, let's see.

24:34

So this is the dispute I got.

24:37

I can see what the dispute was, who raised it, what the comment about it was, and then I can resolve the dispute.

24:49

Again, when I resolve the dispute, it will open the editing mode, and I can say: yep, you're right, I missed it, and correct the score; or I can say: you are not right, and then explain again why.

25:03

And once you resolve the dispute, it will be cleared from your page, and the agent will get a notification about it.
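
Put together, the notify, acknowledge, and dispute loop reads like a small state machine. The sketch below is one hypothetical way to model it, not Elevate's internal implementation.

```python
from enum import Enum, auto

# Hypothetical state machine for the review lifecycle described above.

class ReviewState(Enum):
    QA_COMPLETE = auto()    # manual QA finished; notification emailed to agent
    ACKNOWLEDGED = auto()   # agent confirmed within the configured window
    DISPUTED = auto()       # agent disagrees with part of the score
    RESOLVED = auto()       # auditor corrected the score or explained it

TRANSITIONS = {
    ReviewState.QA_COMPLETE: {ReviewState.ACKNOWLEDGED},
    ReviewState.ACKNOWLEDGED: {ReviewState.DISPUTED, ReviewState.RESOLVED},
    ReviewState.DISPUTED: {ReviewState.RESOLVED},
    ReviewState.RESOLVED: set(),
}

def advance(state: ReviewState, target: ReviewState) -> ReviewState:
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state.name} -> {target.name}")
    return target

state = advance(ReviewState.QA_COMPLETE, ReviewState.ACKNOWLEDGED)
state = advance(state, ReviewState.DISPUTED)   # agent raises a dispute
state = advance(state, ReviewState.RESOLVED)   # auditor resolves; agent notified
```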

25:13

Scorecards, just to touch on them very briefly: as I said, we can create several scorecards.

25:19

The main one is the base one, which we want to have perfectly set up, because we want great AutoQA.

25:26

But you can also create a lot of manual variations here.

25:31

The creation is through the button.

25:33

So you can see here all the information: it will have the skill, it will have the behavior, you can decide which ones you tag, you can decide their weighting (you have flexibility), and obviously the grade itself, whether it's negative, positive, or any other combination.

25:54

And one more thing to say: we love to welcome everybody, all the users, into the platform.

26:01

So it's not only designed for the managers; it's also designed for agents.

26:06

If agents come here, they will see the disputes that they raised.

26:10

They can also visit the tickets of interest.

26:13

This is kind of a dashboard.

26:16

We have different widgets with different stuff.

26:19

Joe mentioned previously, for example, ZTP (zero-tolerance policy) tickets.

26:23

Those are tickets, cases that have some impolite language (I don't want to be rude).

26:30

Or cases where some confidential information was shared, whether by the customer or by the agent; we want to be very careful about those and make sure that we're not losing them.

26:43

So those will appear in this widget.

26:45

We also have a feature, both on the Elevate side and in the core product, where we can open the case, even if it's not closed, at any point through the lifecycle of the case, and flag it for manual QA, because we see some potential there and we don't want to forget about it.

27:02

This will also appear in one of those widgets, and you will see all manually tagged cases here, so you don't need to figure out how to find the case.

27:14

I think I covered most of the stuff.

27:17

There is more functionality here, like the feedback area, where you can record a video through Loom explaining some information about the case, about the dispute, about whatever you see in the application, and you can share it with your team.

27:33

Very good for training and coaching, as well as for getting feedback.

27:37

So there is more here.

27:44

Yes, I can get back to the overall QA, which is just the summarization of it all.

27:50

If I go back to the dashboard: I started with the AutoQA just to focus on it, and then explained a little bit about the manual.

27:58

But theoretically on the dashboard you can see both of them.

28:01

So the summarization of both gives you a perspective on the general score as well as the customer effort score.

28:09

So, just to make sure: we're not just doing QA on the cases in order to score our agents.

28:16

We're also analyzing all the account and customer data and giving a CSAT; well, I don't want to call it CSAT, because it's not a survey.

28:25

It's just something we're doing automatically across all the cases.

28:28

You can hear the power of it, right?

28:31

We don't need to send a survey anyway.

28:32

We don't need to wait and beg the customer to reply to the survey.

28:37

We just do it automatically based on the data, and believe me, it's good quality, without any additional effort.

28:45

So you see it right away here on the dashboard.

28:48

It covers all of the functionality, auto and manual, side by side, split here on the dashboard.

28:58

You can see them side by side.

28:59

[Inaudible.]

29:01

Yeah