Max Greene & Kenneth Law 31 min

How to Slash Your Customer Escalations with AI


Traditional escalation practices are a resource drain on your support organization. This session will reveal how AI can revolutionize your escalation management, cutting response times while ensuring only the most critical issues reach higher levels. We’ll challenge the notion that escalations are merely reactive and show you how to anticipate and manage potential crises before they arise.



>> All right. So just a brief intro. As Judy said, my name is Max. I'm a senior customer success manager with SupportLogic. I've been here at SupportLogic for about three years now, working with many of our customers in their journey to reduce escalations and to become more proactive with AI. And I'm joined today by one of our esteemed customers, Ken, from 8x8. Ken, would you mind introducing yourself for the audience here?

>> Sure. Hi, I'm Ken Law, the director of digital support at 8x8. I've been a customer of SupportLogic now for, I believe, almost four years, and it's been a fantastic journey thus far.

>> Awesome. So I'm going to assume that this is the clicker. It is. Excellent. Okay. So what we're going to cover today: the challenge and impact of customer escalations, a bit of proactive escalation management and how SupportLogic predicts escalations. Ken's going to tell a bit of the story of how 8x8 has been successful, and we'll also talk a little bit about the practical side: how do you implement a proactive workflow for reducing escalations and being preventative in that capacity? So up on the screen here, you'll see some of the typical challenges that we see persisting in support. These should be very familiar to many of the folks here: things like constant firefighting, not understanding the true voice of the customer, and cost center versus revenue center. Ken, what do these mean to you? How have you seen them impacting your organization?

>> These are major pain points for us, and they're what drove a lot of our escalation rates as high as they were. The inability to really identify what the customer issue was. The inability to really manage our own time sometimes. One of the biggest pain points for our customers was just timely updates: getting back to them in a timely manner, recognizing that they had issues they needed resolved quickly. And we weren't able to take action on them as quickly as they wanted us to.

>> Yeah. You see here we talk about focusing on the wrong metrics for dealing with escalations. What would you typically measure beyond just escalation rate to try and improve a proactive experience?

>> A few things we really looked at were total time to resolution milestones. One of the things we noticed when we began using the SupportLogic tool was that we weren't really quite clear on what our milestones were. Neither were our customers. And we may have been measuring our milestones incorrectly, setting the wrong expectations based upon what customers wanted to see.

>> Awesome. Okay. So, yeah, these are typical challenges, near and dear to all of us. One that I don't actually see here, and that I'd like to call out with a question for you, Ken, is the impact of escalations on morale: on the employee experience. From what I've seen, support engineers, support managers, support teams in general maintain a kind of pride of ownership in resolving a case. They want to believe that they can resolve it, engineers in particular. And so they often don't ask for help quickly enough to get ahead of things. That can lead to escalations, which can have an impact on morale. There's nothing worse than being the engineer who owns a case that all of a sudden is escalated, with leaders coming to you about it. So how have you seen escalations impact morale in the organization, among the support teams?

>> Absolutely. Coincidentally, I was thinking about that earlier today when Krishna was talking about the hard cost and the soft cost of escalated cases. Beyond the fact that you have a cost per case, and that an escalated case carries exponential cost on top of that because of the other people you have to involve, SMEs, managers, specialists, whoever, there is an inherent cost to the person as well, a mental health deficit. Each time a case becomes escalated, whoever is handling it gets chipped away a little bit. And let's be frank: we love our customers, but customers can be terrible. I'm a customer myself; I recognize how terrible I can be, especially when I'm in that escalated state, and I can take a good chunk out of a person when I'm on the phone with them. That happens to our agents day in and day out when you have escalations. So the fact that we've been able to reduce our escalation rate to the point we're at means the teams that have been handling these escalations are feeling much better. They're happier, they're more content. And because of that, you see an improvement in not just their performance, but in the CSAT from our customers as well. So it's been a huge win for us.

>> All right. So, what are some of the long-term impacts on customers of firefighting? 96% of customers say the support experience is important in their choice of loyalty to a company. 82% of support leaders are actively looking to shift to more proactive support models that directly drive revenue growth. Ken, what do these figures mean to you?

>> They mean everything. Obviously, we put in time and effort to ensure that our customers are happy and their issues resolved. Every time we're able to save a case from becoming escalated, that's better word of mouth reaching our customers and potential customers. So it really helps to reinforce the brand value that we've been trying to drive day in and day out.

>> Terrific. Yeah. Okay. So, let's talk a little bit about the impact of escalations on revenue. Why do escalated cases carry a higher cost for the business? There are a number of different aspects to this. One of the biggest problems with an escalation is that when a member of the C-suite or other leaders get involved, all of a sudden the cost of the case itself skyrockets. The hourly rate of the support engineer working the case may be manageable, but the hourly rate of your CEO or your CCO is a very different thing. Plus there's the human cost on morale of having a senior executive involved directly with the team members trying to troubleshoot the issue, right?

>> Yeah.

>> There's solution development on the engineering side when you have to bring in engineering resources, as well as just the duration of the case, and obviously the risk to the bottom line of customer churn. What do these mean to you, and how do you see them impacting the business?

>> It's funny, because I see C-suite in the box right away and that hits home pretty hard. With platforms like LinkedIn, it's become very easy to reach a CEO or a CCO, and it's happened to us multiple times that we've had to take action because very simple cases suddenly had C-suite-level attention, which is bad for the customer, but really bad for us too, since we're supposed to be handling all of this in-house. On top of that, rapidly scrambling for solutions that we could have spent time, effort, and resources building properly from the start impacts our ability to scale and to control our costs. And each time this happens repeatedly with customers, the greater the chance of churn. For us, trying to protect our revenue, our subscriptions, and our customers, it is paramount to success.

>> Great answer. Thank you. Okay. So, proactive escalation management. We know that escalation management in general has typically been a very reactive process, and part of the promise of AI has been to make it more proactive, the ability to get ahead of these things. I imagine 8x8 was initially very reactive in its strategy for dealing with escalations, even if you wanted to be proactive. So what does proactive escalation management actually mean from your perspective?

>> For us, it meant identifying cases that were on the cusp of becoming escalated, identifying some root causes of why those cases were on the cusp, and trying to address them beforehand. The idea of being proactive, and not letting the issues customers had flame up into something much bigger than they needed to be. And really adapting our service to ensure that we weren't making the situation worse than it needed to be.

>> Right, yeah. What are some of the outcomes that we would hope to achieve from being proactive on escalations? Obviously the reduction of escalations; we talked about the improvement to the employee experience as well, and the improvement to CSAT. Any other things that jump out at you?

>> Outcome-wise, we were seeing escalation rates anywhere from 10%, and we hit a high of about 13%. As we began to use the SupportLogic tool, we saw those rates drastically fall. In fact, we're at a point now where we're probably sub-2% on any given day, which is fantastic. But what the tool allowed us to realize is that we had a lot of procedural engagement issues. One of the key pieces was how we were speaking to our customers. Our porting team began to use SupportLogic, and our porting team typically had the highest escalation rate, but they also had very low morale. One of the reasons, we came to notice, is that porting numbers is a very procedural task. It's very mundane. You basically fill out a form, you submit it, it's been correctly submitted, and you wait for the carriers to take action and actually do the number porting itself. But during that period, the customer is left waiting in limbo, wondering if things are working, if things are progressing. And we did what we do: we had a very procedural means of handling this, with templatized email messages. That wasn't enough for our customers, and we didn't recognize it. From our quality assurance perspective, we did everything we needed to do. We checked all the boxes. But you can score 100% on QA, and that doesn't necessarily equate to 100% CSAT. So it really made us realize that we weren't engaging our customers appropriately. Customers wanted that warm, fuzzy feeling. They wanted to know that their issues were being taken care of, and that even if it was taking a long time, even though it was technically out of our control, we were with them. We understood, we were monitoring, we were letting them know that things were progressing the way they were supposed to, and that we recognized it required some patience. There was a way for us to make that engagement nice and painless, as opposed to what we were doing, which was letting them suffer in silence a little bit. Seeing that happen again and again, being shown this through the predictive model, allowed us to make that change. In that sense it helped us drop our escalation rates, but it also made the team far more proactive, far more engaging. And they're a much happier team as a result.

>> Awesome. Yeah, you talk about suffering in silence there, which I think is always an interesting thing. We don't necessarily want to provide empty updates to customers, but we do want to make them feel like we're constantly right alongside them in the process of getting their issues resolved, right?

>> Yeah, customers don't want that "here, follow the knowledge-base article" solution anymore. And with the advent of technology, we shouldn't be providing that as a solution.

>> Yeah, great. Okay. So you've probably seen this slide already this morning. This is the benchmark and bedrock of what SupportLogic does. We analyze all of the unstructured text in your conversations and interactions with your customers. We heard in some of the panels earlier that 90% of customer communication is unstructured data; metadata makes up an important portion, but a relatively small one. So the way SupportLogic looks at this is that we extract and categorize a variety of different sentiments and signals across your cases. Those range from positive sentiment to negative sentiment, things like frustration and confusion, as well as customers expressing urgency about getting their issues resolved, or that they're facing a critical issue. And what SupportLogic does with this information is roll it up into a series of scores: a sentiment score, an attention score, a support health score. You've heard about these a little already, but one of the key things is that with the scores we create, you're maintaining context over the life of the case. We know whether a case is trending in a positive or a negative direction. Escalation prediction is not a binary thing. It's a series of actions, activities, and feelings over the life of a case that take it in a direction that can eventually put it at risk of escalation, and actually escalating. The beauty of this is that many of those factors are measurable, and measurable in advance. That allows for the ability to predict these things ahead of time, and at the very least to point out cases that are at risk. And if you can be notified of at-risk cases, you can start to be proactive. You can build proactive workflows around addressing these issues before they come to a head. So in a nutshell, this is what SupportLogic does and how we predict escalations. It's based on the actual conversations with your customers, and there's a wealth of information there that we leverage for this purpose. You're pretty familiar with this at this point, right?

>> Pretty much.

>> But anything specific about this that jumps out at you?
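As a rough illustration of the rollup described above, turning per-message sentiment signals into a case-level score tracked over time, here is a minimal sketch. The signal names, weights, baseline, and decay factor are all invented for illustration; this is not SupportLogic's actual model.

```python
# Hypothetical sketch: roll per-message sentiment signals into a
# case-level health score. Weights and decay are invented, not
# SupportLogic's model.

# Negative weights drag the score down, positive weights lift it.
SIGNAL_WEIGHTS = {
    "positive_sentiment": +10,
    "frustration": -15,
    "confusion": -8,
    "urgency": -12,
    "critical_issue": -20,
}

def case_health(messages, baseline=70, decay=0.9):
    """Score a case from its message history.

    Each message contributes the summed weights of the signals detected
    in it. Older contributions decay, so the score reflects the recent
    trend of the conversation rather than just its total history.
    """
    score = float(baseline)
    for msg_signals in messages:  # oldest to newest
        delta = sum(SIGNAL_WEIGHTS.get(s, 0) for s in msg_signals)
        score = decay * score + (1 - decay) * baseline + delta
    return max(0.0, min(100.0, score))

# A case that starts fine, then shows confusion, frustration, urgency:
history = [
    ["positive_sentiment"],
    ["confusion"],
    ["frustration", "urgency"],
]
print(case_health(history))  # lower score -> higher escalation risk
```

The decay term is the design point: a case that was healthy months ago but has turned sour recently should score low, which matches the "trending in a positive or negative direction" framing above.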

>> So one of our failures jumps out at me. When we initially adopted and began to implement SupportLogic, we weren't quite using it the way that was suggested to us. We had a little bit of arrogance on our side, and we thought we'd just inject this into our current case management process, manage it, figure it out, and it'd be fine. It really wasn't. A big part of that was that we were very set in our own ways. We used email alerts to notify us when cases were potentially going to escalate, and we may have taken action on them, but one of the missing pieces was that we weren't letting the tool know whether it was correct or not. It wasn't learning from what we were doing. We worked with SupportLogic more, and SupportLogic was great in partnering with us, building buttons into the email alerts to allow us to do that. But we still weren't fully adopting that, which meant that for the better part of a year, year and a half, even though we were using the tool, even though it was actually identifying potential escalations, it wasn't really learning from it. It took us probably a year and a half of adoption before we began to really feed back into the tool, and to build up a lexicon so it could identify even greater expanses of cases that could potentially escalate. So it was a big learning moment for us, and it's a big piece that really stands out to me now.
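The gap described above, acting on alerts without ever telling the system whether its prediction was right, is essentially a missing feedback loop. A minimal sketch of the idea follows; the field names and functions are invented for illustration and are not SupportLogic's API.

```python
# Hypothetical sketch of closing the loop on escalation predictions.
# The point: if reviewers never record whether an alert was correct,
# the system cannot learn. All names here are invented.

import json
from datetime import datetime, timezone

FEEDBACK_LOG = []  # stand-in for a real feedback store

def record_feedback(case_id, predicted_risk, reviewer_verdict):
    """Log whether a predicted at-risk case actually needed intervention.

    reviewer_verdict: True if the alert was correct (the case was
    genuinely heading toward escalation), False for a false positive.
    """
    FEEDBACK_LOG.append({
        "case_id": case_id,
        "predicted_risk": predicted_risk,
        "correct": reviewer_verdict,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    })

def precision_so_far():
    """Fraction of reviewed alerts that were genuine: a simple running
    health check on prediction quality as feedback accumulates."""
    if not FEEDBACK_LOG:
        return None
    return sum(f["correct"] for f in FEEDBACK_LOG) / len(FEEDBACK_LOG)

record_feedback("CASE-1001", 0.87, True)
record_feedback("CASE-1002", 0.91, False)
record_feedback("CASE-1003", 0.78, True)
print(json.dumps({"alert_precision": precision_so_far()}))
```

The "buttons in the email alerts" mentioned above play exactly the role of `record_feedback` here: a one-click way to capture the reviewer's verdict at the moment the alert is handled.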

>> Yeah, it's interesting. When you adopt a new tool, the technology is only part of it. The change management piece is the other part, and building a proactive escalation practice when you're used to a purely reactive one is a big part of that change management.

>> Yeah, pretty key.

>> Okay. So why don't you speak to us in a little more detail about what this looked like for you, from inception to actually achieving the results? You talked just now about the changes that you had to make, but maybe get into even more detail about the teams that are using SupportLogic, how they're using it, the workflows they've followed, all of these things.

>> Sure. For us, as I mentioned before, escalations were a huge pain point. We were seeing anywhere from 10 to 13% escalation rates for our support cases. So we needed something, some way to really begin to reduce that, and throwing people and processes at it wasn't working for us. So finding a tool like SupportLogic was key. For me it's near and dear, because it was the second AI tool that I implemented for our company, the first being a customer-facing chatbot. I always felt SupportLogic was more internally focused, something that helped us drive productivity and efficiency for our teams, which was key for me. In finding the tool, in using it, in the demos, in the POC, everything checked the boxes. The biggest challenge for us was adoption. And it wasn't just that it was a brand new tool. It was the fear of the tool, the fear of AI in general. Is this going to replace me? Is this going to replace me as a manager? Do I need to start looking for another job? So for a period of time, it was about dispelling the idea that AI was replacing people. This was really just a tool used by people to drive efficiency and productivity. That was a key piece of it. Beyond that, it was breaking the chain of what people were used to doing. People had a very formulaic way of managing their cases, and unfortunately, because of that formulaic way, you begin to get complacent about making improvements, and trying to inject a brand new tool into that process flow was difficult. So it took time to adopt; it took time for people to comprehend and recognize the value. And we had to do that by breaking up into smaller teams. Instead of a global release, it became smaller groups, groups like our number-porting team, which began to use it and adopted it wholeheartedly. It's had a massive impact on that team. Before, when people talked about the number-porting team, there was that groan, the groan you wouldn't hear out loud, but it was there. Now it's a team that's proud of the work they've done. They've been recognized globally by the other teams for the numbers they produce, and the morale on that team has turned around, just based upon the usage of one tool.

>> How did you leverage the success of that one team to drive adoption among the other teams? Because you're always trying to use that initial success to get over, at least partially, the trust barrier you're talking about. How did that go?

>> We did. Aside from technical support cases, the other cases that tend to be problematic, as you can imagine, are billing cases. Our billing team began to see the drastic change in our number-porting team, going from 13% down to 2%, and the change in morale, the change in how they were being spoken about by the company, and began to say, "I think we want to take a look at this tool, too." So over the past several months, we've been watching how they've been adopting it as well, how they've been using it and applying it to monitoring their own billing cases. So it's had a kind of domino effect, with other teams adopting it and really wanting to use the tool to drive their own success.

>> Excellent. What about these specific numbers we're talking about here? 75% reduction in escalations, we've talked about that a little bit. But improvements in backlog and things like first response, where have those fit into what you've been doing with SupportLogic?

>> So, the reduction in backlog. Obviously, we've had a massive reduction in our escalations, and those escalations have an impact on the resources and time available for looking at other cases. Oftentimes, escalations create additional cases down the road. So it's given us a chance to work on the massive backlog we had before, reducing it considerably and streamlining the process, so that we really are back to one case, one solution, as opposed to one issue generating multiple cases. That's really streamlined things and made us a more efficient organization.

>> So solving, in a sense, one problem has made it easier to tackle other problems.

>> Paying attention to the one problem has reduced the chance of that one problem festering into multiple problems.

>> Awesome. Anything else that you'd like to call out about the team's experience?

>> You know, it's not just the team; the company itself has recognized the value, so much so that we've partnered with SupportLogic. It's now a part of our product, with integrations into our tool to deliver not just the same value we found for ourselves, but that same value to our customers as well. As for the teams: as you can imagine, the porting team feels like SupportLogic is a bit of a godsend, because it really did change their trajectory beyond just being happy with performance. The employees are much happier too. And that's the one piece you can't quite quantify or place a value on: the happiness of those employees coming to work and working on the cases they do.

>> Excellent. Yeah. As far as how the teams are specifically using it: you talked initially about how they used the alerting capabilities, but now that you've had them really start to perform more of those recommended workflows, what does that look like?

>> So they've really begun to dive into the tool a little more. Each team is now looking at building their own specific alerts: based upon a different segmentation of customers, how they want the tool to react to what it's seeing, the words and phrases that should raise those concerns about escalation or potential churn. So everyone's been pretty keen on finding different ways of customizing and using the tool.
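The per-team customization described above, different customer segments and different trigger phrases per team, can be pictured as simple rule definitions. The sketch below is purely illustrative; the rule shape, team names, and phrases are invented and are not SupportLogic's configuration format.

```python
# Illustrative sketch of per-team alert rules keyed on customer segment
# and trigger phrases. Not SupportLogic's configuration format; the
# rule shape and contents are invented.

ALERT_RULES = [
    {
        "team": "porting",
        "segments": {"enterprise", "mid-market"},
        "phrases": ["port delayed", "no update", "still waiting"],
    },
    {
        "team": "billing",
        "segments": {"enterprise"},
        "phrases": ["double charged", "refund", "cancel"],
    },
]

def matching_alerts(case):
    """Return the teams whose rules fire for a given case."""
    text = case["last_message"].lower()
    fired = []
    for rule in ALERT_RULES:
        if case["segment"] in rule["segments"] and any(
            phrase in text for phrase in rule["phrases"]
        ):
            fired.append(rule["team"])
    return fired

case = {
    "segment": "enterprise",
    "last_message": "We were double charged last month and need a refund.",
}
print(matching_alerts(case))  # -> ['billing']
```

A real system would match on extracted sentiment signals rather than raw substrings, but the idea is the same: each team owns its own segment filter and its own trigger vocabulary.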

>> Excellent. Now, if you were to make a recommendation to any of the folks here who are looking to implement a proactive solution to escalations, what are the top two or three things they should think about? And I'm not just talking about "implement SupportLogic"; I mean the processes they need to put in place to be successful in becoming more proactive in their approach to escalations.

>> Well, if you were to work with a company like SupportLogic, first off I'd say: trust the process. They know what they're talking about, so if they make a suggestion, it's at least worth considering and maybe even trying out. Second: start small. Trying to implement something globally at this scale is going to be difficult to do, not to mention difficult for everybody involved to adopt. Starting small, showing a proof of concept, showing some success, lets it domino out to the other teams. And that has a greater chance of not just being successful, but of identifying gaps you hadn't thought about initially, and then filling those gaps in as you spread the scope farther and farther.

>> Awesome. Thanks. Yeah. Okay. So, just briefly, with regards to SupportLogic itself: Ken was talking about how there are some recommended workflows, and about trusting the vendor a little when they're the expert in something, especially in how they recommend you use their platform. So what does that typically look like with SupportLogic? SupportLogic generates a number of distinct signals. I talked about the sentiment scores, the needs attention scores, and the escalation predictions that are part of that. For all of these signals, there are workflows you can perform within the platform, which are incredibly important. However, you can't be in another platform all the time; your teams are inundated with all of these different tools. So what we always recommend is: leverage the SupportLogic alerting capabilities extensively, but use those alerts to drive you into the tool, so that you can then perform the workflows that feed information back to the AI, as you were talking about, and ensure there's a continuous improvement process happening. It's actually relatively straightforward. SupportLogic points things out to you. Take a look, determine whether something needs to be done to drive the case toward a better outcome, act on it, and then move on to the next one. And our customers have seen again and again that simply by implementing some rigor around this, you're going to catch cases that would have snowballed into an escalation. So it's about as simple as that with regards to using the product. And so, maybe let's open it up to a question or two. If anyone has a question for Ken about what 8x8 has done with SupportLogic, or the proactive escalation management they've implemented, please.

All right. Anyone? Yeah, please, go ahead.

[inaudible question]

>> So, it became clear to us during multiple QBRs, multiple sessions at the office, that escalations were a problem. And we weren't clear why, because we felt like we had a pretty good handle on our case management and our milestones, but nevertheless we were still seeing high escalation rates. It was coming to conferences like this, where we began to see different tools, that we discovered SupportLogic. And it began to make sense to us. At the time, we had just started to scratch the surface of what something like AI could do for us from a customer experience perspective, from a resolution and solutions perspective for our customers. We hadn't really considered what it could do internally as a tool for us yet. We began to dive into what it might potentially do and how it could drive some productivity and efficiency, and it really seemed to make sense. At that time, we didn't have any reason not to give it a try. So the team was very open to letting us try a POC, which, by the way, is what I do now with any AI tool. I'll be happy to take a demo; I'll talk to you for as long as you want. But unless you can show me what you're telling me and claiming, I'm not necessarily going to believe anything you have to say. So we did a POC with the tool, and it really allowed us to visualize what value it could bring. And that kind of sealed the deal for us. That's how we decided: if we have a tool for our customers, maybe it's a good idea to have a tool for our agents as well.

>> [Partially inaudible audience question about measuring how many predictions turned into actual escalations.]

>> Yeah. It's actually been a bit of a pain point, to be honest: trying to determine how many predictions have resulted in escalations, and trying to do that historically. One of the problems we had early on was getting that data. Very early on it was almost a labor of love, because it required so much effort from the engineering team on the SupportLogic side to deliver us the raw data, and then we generated our own reports on top of that. I'm very happy that the tool has grown and innovated, and now we're able to get a lot more of that data, and those metrics, out in near real time.

>> [Partially inaudible audience question about whether leadership understands the value of the work.]

>> Absolutely. For us, we've been on a digital transformation journey for quite some time now, and I almost want to say we've also been on a separate journey of AI adoption. The work we're doing from an AI perspective is very, very crucial to our continued success in support, and our executives are very keen on understanding what we've been doing. And, honestly, we're able to share with our customers as well what our journey has been.

>> Any other questions? Please, go ahead.

>> [INAUDIBLE]

>> It is.

>> [INAUDIBLE]

>> I think you said, "from an account"?

>> From an account perspective? Not quite yet. We began to think about that when the account health score began to pop up, and that's partially what's been integrated into 8x8. So it is an area of interest for us; we just haven't done it quite yet.

>> All right. Well, thank you, everyone. And thank you so much, Ken, for sharing your story up here and being a part of this conference.

>> It's been fun.

>> Yeah. Absolutely.

>> Thank you.

>> All right. Thanks.
