Karan Sood & Krithika Manohar 37 min

Product Keynote: Redefining Post-Sales CX with AI


Showcasing AI solutions that are redefining support: from automatic case and account summarization to intelligent case routing, precise account health scoring, and seamless voice integration. This isn’t just about improving efficiency; it’s about transforming your entire support strategy. Get ready to see how these innovations can propel your operations to new heights.



0:00

Good morning everyone. I'm also joined on stage by my colleague, Krithika, huge

0:06

round

0:06

of applause for Krithika as well, please.

0:11

All right. First, I want to extend a heartfelt thank you to all our customers

0:18

and partners

0:19

that are joining us here today. As Joe mentioned, we have an incredible lineup

0:24

of speakers and

0:27

a power-packed session agenda through the course of the day that hopefully will

0:33

inspire

0:34

you and empower you. But more importantly, we're here to learn from you, to

0:40

listen to

0:41

you and build connections. So I highly encourage all of you guys to stay

0:46

through the day, meet

0:47

all the experts that we have from SupportLogic here, also meet some of the

0:51

industry thought

0:52

leaders that we have here as well, and make the best use of this conference.

0:56

It's impossible to be at a product keynote in 2024 and not talk about AI, right?

1:07

But

1:08

we're not just going to talk about AI, we're actually going to show all live

1:13

demonstrations

1:14

of all the wonderful AI stuff that we've been working on over the last few months.

1:20

And

1:20

we all know live demos can sometimes go wrong, right? I mean, I always say they

1:28

work perfectly

1:29

till the time you're going to show that they work perfectly, right? So if they

1:33

go wrong,

1:34

cheer for us, clap for us, pray for us, and we'll make them work. All right.

1:42

But before

1:43

I go into the live demos, I think it's important to touch upon the evolution of

1:48

AI at SupportLogic.

1:50

Krishna talked about it. We're extremely proud of this because, you know, from

1:54

the time SupportLogic

1:55

was founded, we've been pushing the boundaries of innovation by leveraging the

2:00

latest trends in AI.

2:02

In 2017, when Google announced their first large language model, Krishna talked

2:13

about it. But

2:14

we were one of the first early adopters in the valley, at least in the startup

2:19

ecosystem,

2:20

to be leveraging BERT to build classifiers and build our aspect-based sentiment

2:26

analysis.
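To make the signal-extraction idea concrete, here is a minimal sketch of a BERT-family message classifier using a public stand-in model from Hugging Face; SupportLogic's fine-tuned models, labels, and aspect taxonomy are proprietary, so everything below is purely illustrative.

```python
# Illustrative only: a BERT-family classifier applied to support messages,
# in the spirit of the signal-extraction engine described above.
from transformers import pipeline

# Public stand-in model; the production models and label set differ.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

for msg in [
    "This outage is blocking our release and nobody has responded in two days.",
    "Thanks, that workaround fixed the issue!",
]:
    result = classifier(msg)[0]
    print(f"{result['label']:>8}  {result['score']:.2f}  {msg}")
```

In a real aspect-based setup there would be one classification per (aspect, sentence) pair rather than a single label per message.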

2:27

This is the core of what we've done from the beginning. We've heard some

2:30

wonderful stories

2:31

today from some of our customers that use some of these components. So this is

2:35

the signal

2:35

extraction engine that we already built out in 2018. In 2019, we launched our

2:42

scoring engine,

2:45

which is essentially a heuristic-based model, which we leverage for sentiment

2:51

scores, attention

2:53

scores, QA scores, account health score, and so on. And this is also sitting on

2:58

a very,

2:59

very strong foundation of some of the mathematical models that are used in

3:03

digital signal processing.
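For intuition about what a heuristic, DSP-grounded score can look like, here is a toy example built on a first-order low-pass filter (an exponentially weighted moving average), a standard digital-signal-processing building block; the weights and the 0-100 mapping are invented for the illustration, not the production model.

```python
# Toy heuristic score: smooth noisy per-message sentiment with a
# first-order IIR low-pass filter, then map the result onto 0-100.
def smoothed_score(sentiments, alpha=0.3):
    """sentiments: per-message values in [-1, 1], oldest first."""
    ewma = sentiments[0]
    for s in sentiments[1:]:
        ewma = alpha * s + (1 - alpha) * ewma  # classic single-pole filter
    return round(50 * (ewma + 1))              # map [-1, 1] onto 0-100

print(smoothed_score([0.4, 0.1, -0.6, -0.8, -0.9]))  # trending negative -> low score
```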

3:04

And we did this way back in 2019. In 2020, we launched our flagship

3:12

predictive engine, which predicts escalations. We heard so many wonderful

3:19

stories

3:19

of how customers have used our prediction algorithms to reduce escalations.
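The real engine is proprietary, but its inputs resemble the contributing factors walked through later in this keynote (conversation volume, negative signals, agent backlog, customer escalation history). A hedged sketch of the general shape of such a predictor:

```python
# Hedged sketch of an escalation predictor; feature names and training data
# are invented to mirror the factors mentioned in the talk.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: conversations, negative_signals, days_since_agent_reply,
#          agent_open_escalations, customer_escalations_last_90d
X = np.array([
    [33, 12, 4, 14, 5],   # escalated
    [ 5,  0, 1,  2, 0],   # resolved quietly
    [20,  7, 3,  9, 2],   # escalated
    [ 8,  1, 0,  3, 0],   # resolved quietly
])
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[30, 10, 5, 12, 4]])[0, 1])  # estimated P(escalation)
```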

3:26

This was done already in

3:29

2020. In 2021, we launched our multi-model recommendation engine that we use

3:37

for case routing. And this

3:38

works in actually two modes. It works in a fully autonomous mode, where based

3:44

on many different

3:45

criteria, we're going to look at all of this in the demos. It can automatically

3:50

route tickets

3:50

to the right agents, or it can also work in a manual mode,

3:56

where it can help

3:58

a human being pick the right agent for the right case. Now, this is interesting,

4:03

because there is

4:04

all the hype around agentic AI right now. This was our foray into agentic AI way

4:08

back in 2021.
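As a rough illustration of both modes, here is a toy scorer that ranks candidate agents by skill match, backlog, and time-zone overlap, the kinds of criteria the demo walks through later; the weights and field names are invented.

```python
# Toy routing scorer with invented weights: higher is a better match.
def routing_score(agent, case_skills):
    skill_match = len(set(agent["skills"]) & set(case_skills)) / len(case_skills)
    load_penalty = agent["open_cases"] / 20.0        # normalize the backlog
    return 0.6 * skill_match + 0.3 * agent["tz_overlap"] - 0.4 * load_penalty

agents = [
    {"name": "Kathy", "skills": ["networking"],          "open_cases": 18, "tz_overlap": 0.2},
    {"name": "Alex",  "skills": ["kafka", "networking"], "open_cases": 6,  "tz_overlap": 0.9},
]
case_skills = ["kafka", "networking"]

# Autonomous mode would assign the top-ranked agent automatically;
# manual mode would show this ranked list to a human instead.
ranked = sorted(agents, key=lambda a: routing_score(a, case_skills), reverse=True)
print([a["name"] for a in ranked])
```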

4:10

Then we built the alerting framework in 2022. Last year, we introduced the

4:17

summarization engine

4:18

that we're going to look at in detail during the demos. We do summarization at

4:24

an account level.

4:25

We do summarization at a case level. We do summarization at a knowledge level.
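Since Claude on AWS Bedrock is named just below as one of the fit-for-purpose models, here is a minimal, illustrative Bedrock call for a case summary; the model ID is a real public Bedrock identifier, but the prompt and plumbing are assumptions, not SupportLogic's actual pipeline.

```python
# Illustrative case summarization via a Bedrock-hosted Claude model.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
case_text = "Customer reports repeated API timeouts since the v2.3 upgrade..."

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [{
            "role": "user",
            "content": f"Summarize this support case for a support manager:\n\n{case_text}",
        }],
    }),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```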

4:30

And we're going to look at all of that in the demos as well. And of course,

4:33

under the hood,

4:34

we use the best fit-for-purpose large language models. We use Anthropic Claude

4:41

Sonnet 3. We use Mistral. We use AWS Bedrock services. But again, this is out

4:48

of the experience

4:49

that we have of which model is best for what kind of a use case. And then

4:55

earlier this year,

4:56

we launched our precision-RAG-powered answer engine. Now, this is huge, and

5:02

this definitely

5:03

needs a little bit of an explanation. We all know in complex tech support, your

5:11

simple keyword-based

5:13

searches, or even for that matter, your common RAG architectures, do not work

5:19

because any

5:22

internet-trained embeddings will not correspond with your domain-specific corpus. So

5:28

basically,

5:28

what our answer engine does is it's an implicit answering engine that

5:33

continuously learns from the

5:34

data. It combs through all the knowledge sources. It could be dynamic sources,

5:40

sources like your

5:41

cases, like your Jira tickets, as well as static sources like your KB articles,

5:47

your publicly

5:48

available documentation on the product or whatever, and then finds the right

5:52

solutions and the right

5:53

answers for the problems that you have.
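To make that concrete, here is a sketch of the retrieval half of such an answer engine. The embedding model name below is a hypothetical fine-tune on your own support corpus, which is exactly the point being made about internet-trained embeddings; the documents and query are invented.

```python
# Sketch: domain-adapted semantic retrieval over mixed knowledge sources.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("my-org/support-domain-embeddings")  # hypothetical fine-tune

corpus = [
    "KB-142: Resolving broker handshake failures after TLS rotation",   # KB article
    "JIRA-981: Consumer lag spikes when partition count exceeds 128",   # Jira ticket
    "Case 5531: API timeouts traced to connection-pool exhaustion",     # past case
]
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# "Case as a query": the whole case summary is the query, not a keyword.
query = "Customer sees intermittent broker timeouts after rotating certificates"
hits = util.semantic_search(model.encode(query, convert_to_tensor=True),
                            corpus_emb, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.2f}  {corpus[hit['corpus_id']]}")
```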

5:56

So this is huge, and we're going to talk and show you

5:57

all of this in some time as well. In summary, we have two best-in-class engines

6:04

and models

6:06

for signal detection, for predicting escalations, for recommending the right

6:13

agents,

6:14

and we are also pushing the boundaries of innovation on the cutting edge,

6:19

agentic AI, as well as precision RAG. Now, the combination of all these AI

6:24

innovations

6:25

combined with the richness of data that we have across all post-sales

6:31

interactions,

6:33

combined with the deep understanding of the business domain that we have from a

6:38

support context

6:39

perspective with the company, as well as a very thoughtful and purposeful

6:43

approach to bring

6:45

all of that together into a unified, all-in-one support experience platform

6:50

gives us the portfolio

6:51

that we have that you see on the screen. Again, I'm not going to go through the

6:56

slides.

6:57

We're going to very quickly go into the demos, but just a quick recap, Krishna

7:01

had it on his

7:02

presentation as well. The core is meant for support operations efficiency. It

7:08

has the sentiment analysis,

7:09

it has the escalation management engine, it has the backlog management, and

7:13

then we have the

7:14

four add-on modules. We have the Assign module to intelligently route

7:20

the right cases

7:20

to the right agents. We have the Elevate module, which is essentially our

7:25

quality monitoring and

7:26

coaching tool. We have the Assist module, mainly catered towards agent

7:31

productivity. We have the

7:33

Expand module that Krishna announced today, which is meant for technical

7:37

account managers,

7:38

account managers, and customer success managers to get a holistic score across

7:43

the accounts.

7:45

And then of course, we have Resolve SX, which is essentially our precision

7:50

RAG-powered

7:50

answer engine, which could be used both by the customers on the

7:55

portal, on the chat

7:56

bot, but it could also be used internally by any person in the organization,

8:00

and we will have a

8:01

good demo of that as well. With that, the moment of truth: demos. All right, so

8:08

there are five

8:11

demos that we would like to show you today for the five different personas. We

8:18

're going to start

8:19

with the agent, then go to the support manager, the quality assurance auditor,

8:23

the account manager,

8:24

and what we're calling the answer seeker. Let's start with the agent. Again,

8:29

we're going to look at how an agent can review and prioritize their case

8:35

backlog,

8:37

troubleshoot a complex case, get summaries, get the right answers, search for

8:41

additional

8:42

information that they may need before they respond back to the customer, and of

8:45

course,

8:45

in the end, craft a response. Let's switch over to the demo now.

8:52

So what you see on the screen right now is essentially a view of SupportLogic

8:58

that can be embedded

8:59

within your system of record. In this case, it's Salesforce. It could be

9:04

ServiceNow. It could

9:05

be Zendesk. It could be Microsoft Dynamics, or whichever CRM you use. So

9:10

basically, what we

9:11

are doing is while the agent is looking at cases, we are able to bring

9:16

sentiments and signals

9:18

from SupportLogic into this UI so that they can identify which are the right

9:26

tickets that you need to be working on. For example, the attention scores tell

9:31

you

9:31

which are the most urgent cases that need urgent attention. The sentiment

9:36

scores,

9:36

as well as the sentiments that you see on the top, also give you a good pulse

9:41

on what are the

9:41

different sentiments across all the different cases. You can filter, you can

9:45

sort, and so on.

9:46

Let's see, I've identified as an agent one of the cases that I want to work on.

9:51

I click on this,

9:52

and then I land on the case details page within your Salesforce. On the right

9:57

hand side, what you

9:58

see is essentially our embeddable widget. This, again, is currently being shown

10:04

in the context of

10:05

Salesforce, but you can embed this widget in your CRM as well. Now, the tab

10:10

that we're focusing on

10:11

right now is called the Resolve Assist. Think of it almost like helping an

10:15

agent troubleshoot a case.

10:17

Now, whether it's a simple how-to kind of a case where an agent just needs to

10:21

look for the right

10:22

knowledge article or a solution, or it's a more complex case where the agent

10:28

really needs to

10:29

troubleshoot, Resolve Assist basically helps you do that. How do we do that?

10:35

There are three things

10:36

that are extremely important once again. First, as we said, simple

10:43

keyword-based searches

10:43

do not work: just because the subject of the case matches a couple of keywords, you cannot

10:49

always assume,

10:50

at least in complex tech support, that you'll be able to find the right

10:54

solution based on those

10:55

keywords or any other metadata. So what we essentially do is this is built to

10:59

work for

11:01

almost case-as-a-query, as opposed to keyword-as-a-query. So we can take from

11:07

the case summary

11:08

that you have on the top of the screen, almost derive the context of what the

11:13

case is and pass

11:14

that almost as a query to our knowledge engine. Second, then we comb through

11:21

all the knowledge

11:22

sources that it has been connected to. Again, as I said, these could be static

11:26

sources like your

11:27

KB articles, or these could be complex dynamic sources like your Jira tickets,

11:34

like your past

11:36

cases, and find the right solution. So on the screen, you can also see that,

11:41

based on the

11:43

implicit query, which was the case summary, it's able to find the right

11:47

knowledge sources or the

11:49

past tickets. From here, I can click on any of these and I can navigate to, if

11:54

it was a case, it will

11:55

go back into your SupportLogic system, or if it was a knowledge article, it

11:58

will go back to

11:59

wherever your knowledge article is stored. On top of that, we also actually

12:04

produce the

12:05

knowledge summary. This is again a good, effective use of a large language model.

12:09

We use Mistral for

12:11

this, where we essentially take all the possible right solutions that the

12:17

system has identified

12:19

as part of the troubleshooting, and we are able to compose a knowledge summary

12:22

based on

12:24

the reference knowledge articles. On top of that, I can also go to the

12:28

knowledge search tab on the

12:30

far right, and this is where outside of the case context, as an agent, if I

12:34

have an intuition here,

12:35

I've heard about this issue before, I can just look something up, and again,

12:39

outside of the

12:40

case context, also find something else that could be helpful in providing the

12:45

right solution to the

12:47

customer. Now, before I respond back to my customer, maybe I also want to get a

12:53

good sense of

12:55

all the latest sentiments. So this is where I go into the case insights tab.

12:58

This is where you

12:59

see that there is an extremely low sentiment score of zero, a very high

13:02

attention score of 100,

13:04

there are a lot of negative signals, including frustration. So maybe this is an

13:08

input that I

13:08

want to use as an agent to fine-tune my response when I go and respond back to

13:14

the customer. So,

13:16

by a single click, the knowledge summary along with all the articles that were

13:22

found as possible

13:24

solutions are inserted as a response. You also have the

13:30

next steps and stuff

13:31

like that. What the agent can also do is they can attach any other files that

13:36

they may want to

13:36

attach before sending the response to the customer. They can change the

13:41

tonality. We have prompts

13:44

built into our platform for tonality. For example, maybe I want this

13:48

response to be a little

13:49

bit more empathetic because of all the frustration that the customer has been

13:52

facing. I could also

13:54

change the language of the response in case I'm responding back to a customer

13:59

that prefers a

13:59

different language. All of this is built into this productivity toolkit packaged for agents.
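Under the hood, a tonality or language rewrite like this typically reduces to a prompt template wrapped around the agent's draft. The template below is a hypothetical illustration of the idea, not the platform's actual built-in prompt.

```python
# Hypothetical tonality-rewrite prompt template; the resulting string would
# be sent to the LLM along with the draft reply.
TONALITY_PROMPT = (
    "Rewrite the support reply below so that it is more {tone}. "
    "Respond in {language}. Keep every technical detail, link, and next step "
    "intact.\n\nReply:\n{draft}"
)

draft = "The fix is documented in KB-142. Apply it and the timeouts will stop."
print(TONALITY_PROMPT.format(tone="empathetic", language="Japanese", draft=draft))
```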

14:05

So this was the very

14:08

first demo geared towards how do you make agents productive by helping them

14:15

troubleshoot and find

14:17

the right solutions and quickly send those responses to the customers. Second

14:23

demo. This is my favorite

14:25

one because this honestly touches pretty much everything that we've done and we

14:29

are doing as a

14:30

company. This is a support manager. So what we're going to look at in the

14:36

context of a support

14:37

manager is: typically, support managers are either working on escalations or

14:43

working to prevent

14:44

escalations. So in this particular demo scenario what we will see is how a

14:49

support manager monitors

14:51

the support queue, focuses on the urgent and the escalated cases, finds a case

14:58

that is likely to

14:59

escalate because it's been flagged by our predictive engine, gets a summary of

15:05

the case activity because

15:07

the support manager is not involved in the day-to-day work on the case

15:11

directly,

15:12

leverages AI to determine the next best action and also marks this particular

15:19

case

15:19

for audit, which is something that we will show you as well. Okay, over to the

15:24

demo.

15:26

All right, so now a support manager can either be working on our home page, we

15:32

call it the console

15:33

for all our customers that use it today, or the cockpit, which gives a good sense

15:39

of what's happening

15:40

for a support manager. But in this particular case, for the demo, we've configured

15:44

certain alerts using

15:45

our alerting framework, the engine that I was talking about, which is

15:49

proactively notifying

15:51

the support manager that there is a case that is likely to escalate and this

15:54

could be in Slack,

15:56

this could be in email, this could be in MS Teams or whatever communication

16:00

channel you use in your

16:01

company. So in this particular case it's showing me, you know, that one of those cases

16:05

is likely to escalate,

16:06

I can actually take actions here in Slack itself but I'm actually going to

16:10

inspect this case in

16:12

more detail and launch the SupportLogic UI directly from here.
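Mechanically, an alert like this is usually a small structured payload posted to the channel's API. Here is a hedged example using a standard Slack incoming webhook; the webhook URL and case link are placeholders, not SupportLogic's actual integration.

```python
# Hedged example: post a likely-to-escalate alert to Slack via a webhook.
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

alert = {
    "text": (":rotating_light: Case #239 is likely to escalate\n"
             "Sentiment 0 | Attention 100 | Signals: frustration, churn risk\n"
             "<https://app.example.com/cases/239|Inspect in SupportLogic>")
}
requests.post(SLACK_WEBHOOK, json=alert, timeout=5)
```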

16:19

All right, so now I'm looking at a

16:20

case as a support manager and this is a classic omni-channel case which may

16:27

have started on the web,

16:28

maybe there were some chat interactions, looks like there is also a voice

16:33

interaction here.

16:35

Krishna was talking about our integration with telephony systems, so in this

16:40

particular case

16:42

the voice recording from any of your telephony systems is brought into our

16:49

platform and then when

16:50

we run our sentiment and signal detection, we also run it on the voice

16:55

transcript itself.

16:56

So as you can see on the top there are so many different signals that have been

16:59

identified

17:00

including signals that were detected on the voice call itself.

17:07

Okay as a support manager I have a pretty good sense of you know there's a lot

17:11

of frustration

17:12

and urgency and churn-risk signals. Next maybe I'm going to get a quick

17:17

summary of what has been

17:18

happening on this particular case. So I can quickly click on the case summary.

17:25

Now again when we're

17:25

talking about case summarization: in the previous demo, from an agent perspective,

17:30

the persona is the

17:32

agent and the main context is troubleshooting, whereas for a support manager it

17:37

's less of the

17:38

troubleshooting. It's more of what should I be doing as a support manager? Do I

17:42

need to bring a

17:43

swarming team together? Do I actually need to maybe even reassign the agent? So

17:47

here I get a

17:48

very quick summary for a support manager on what is actually happening on the

17:53

case.

17:53

In addition to that I can click on the start review and what start review is

18:00

going to do is

18:01

it's again going to invoke our escalation engine and give you the reasons why

18:08

our engine thinks

18:09

this case is likely to escalate. For example this could be because of things

18:13

that are happening on

18:14

the case. For example, there are 33 conversations on this particular case and there

18:18

are 12 negative

18:19

signals. It could be something to do with the agent activity. For example the

18:23

agent hasn't

18:24

responded in the last few days. There are a lot of cases that the agent has on their

18:29

backlog. The agent

18:31

is actually actively working on 14 other escalations, and most importantly the

18:37

agent does not have the

18:39

right skills to be working on this particular case. Or, for example, customer

18:43

activity:

18:44

this customer has had five different escalations in the last 90 days. So all

18:48

these different

18:49

contributing factors are leading our engine to predict that this case is also

18:56

likely to escalate.

18:58

What the manager can also do is as you can see at the bottom right of the

19:02

screen there is the AI

19:04

assistant recommendation where AI is recommending that maybe you don't have the

19:11

right agent working

19:12

on this particular case. And then it basically shows you certain other

19:15

recommendations of other

19:17

agents who probably have better skill match. Probably they have better time

19:21

overlap with the customer.

19:22

Maybe they have had better experience working with the same customers on the

19:26

previous escalations.

19:28

And based on many of these different contributing factors the manager can

19:31

decide to reassign the

19:34

case to the right agent. All right. Next, as a manager, I have a good sense of

19:43

this now. I see that

19:46

you know the quality score is 75, and in a QA auditor's world this is not considered

19:52

to be a very high

19:53

score. Maybe this is something as a support manager I want to mark for review

19:58

so that

19:59

the auditor knows that this is a case they should review. What

20:05

I can also do is I

20:06

can use the share option where, again, over email, Slack, or whatever

20:10

communication platform you

20:12

use, you can essentially tag the auditor and trigger the review of the case from

20:20

here itself.

20:20

All right. So that was the second part of the demo focused on the support

20:25

manager.

20:26

Let's go to the third one. The third is a day in the life of a quality

20:30

assurance

20:31

auditor. Here what we're going to see is how they can review the

20:35

compliance scores

20:36

and trends across all cases. They can conduct a quality review on a complex

20:41

case we're going to

20:42

take exactly the same case that we looked at just now, which was omni-channel;

20:46

it had voice.

20:46

So the auditor can actually look at the auto QA results and our QA module also

20:53

operates in two

20:54

modes. There is the fully autonomous mode, so that 100% of the cases and

20:58

tickets in your platform

21:00

are always going to be auto-QA'd. On top of that we provide the manual mode where

21:06

an auditor can say

21:07

okay out of maybe the 10,000 cases here are the 500 that I would also like to

21:12

manually audit.

21:13

So we are actually going to see how the auditor can do a manual QA on the same

21:17

case and from here

21:19

let's now launch our Elevate product, which is our quality monitoring tool. All

21:26

right. So we're on the

21:28

same case, case number 239. It's the same complex case: it's omni-channel, there

21:33

is voice.

21:35

The first thing that I'm actually going to show is, could you click on

21:38

the language, please?

21:39

Yes, please. So as you can see, originally maybe your L1/L2 support was

21:45

dealing with the

21:46

customer in Japanese. So as you can see, you can see the original text. And,

21:52

as

21:53

Krithika has already highlighted, Kathy here is the agent. We are

21:58

also able to detect

22:02

and score on skills and behaviors that you have defined also on the non-English

22:08

text. What we

22:08

essentially do is we get the non-English text, we translate it, and then we

22:13

essentially run our review

22:15

for the agent skills and behaviors on the English text. So as you can see first

22:20

of all there is also

22:21

a review happening on non-English text. Let's switch back to English.
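The translate-then-score flow just described can be sketched in a few lines with generic public models; the production translation and scoring models differ, and the Japanese sample text is invented.

```python
# Sketch: translate non-English text, then run an English-only classifier.
from transformers import pipeline

translate = pipeline("translation", model="Helsinki-NLP/opus-mt-ja-en")
score = pipeline("text-classification",
                 model="distilbert-base-uncased-finetuned-sst-2-english")

japanese_reply = "お待たせして申し訳ありません。ログを確認し、本日中にご連絡いたします。"
english = translate(japanese_reply)[0]["translation_text"]
print(english, score(english)[0])
```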

22:29

Let's go to the voice call or maybe before the voice call. Actually as you can

22:35

also see John Brooks

22:36

is actually the customer. Here as an auditor I'm not just reviewing the agent

22:42

for their skills

22:44

and behaviors. I'm also able to get a good sense of the customer sentiments. So

22:49

I can very easily see

22:50

through the course of the conversation the customer was angry. There was a lot

22:55

of negativity so all

22:56

our sentiment signals are also available for the auditor in our Elevate product.

23:02

Let's scroll

23:03

further down and now you have the voice call. And this rainbow picture actually

23:09

is very interesting

23:09

because first of all you know you can play back the actual call from here

23:17

itself. The different

23:19

colors actually indicate all the different sentiments and signals as well as

23:24

skills and behaviors

23:25

that were detected. And this is based on two things. For example, we use the actual

23:31

voice call and run

23:33

acoustic models on top of the voice call to be able to detect things like was

23:37

there dead air, was

23:39

there too much hold time. At the same time, if there was negativity, if there was

23:43

profanity, if there was

23:44

whatever else, the things that were detected you would be able to see on the score

23:50

card on the

23:51

right. So on the voice call as well you have all the criteria that you could

23:56

have defined as an

23:58

auditor. These are the skills and the behaviors that you want to essentially

24:02

review for every agent.
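As a toy stand-in for one of the acoustic checks named above, dead-air detection can be as simple as scanning the gaps between utterances in a diarized call transcript; the segment format here is an assumption made for the example, and real acoustic models work on the audio itself.

```python
# Toy dead-air check over (start_sec, end_sec, speaker) transcript segments.
segments = [
    (0.0,  14.2, "agent"),
    (14.8, 31.0, "customer"),
    (55.4, 70.1, "agent"),    # 24.4 s of silence before this utterance
]

DEAD_AIR_THRESHOLD = 10.0  # seconds of silence worth flagging
for (_, prev_end, _), (start, _, _) in zip(segments, segments[1:]):
    gap = start - prev_end
    if gap > DEAD_AIR_THRESHOLD:
        print(f"dead air: {gap:.1f}s of silence before t={start:.1f}s")
```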

24:04

So across the voice call for all the skills and the behaviors that can be very

24:08

easily

24:09

configured in our platform you can essentially give a rating and you can add

24:14

detailed comments

24:16

on top of the ratings. Now what I can also do is in addition to the auto QA, I

24:22

as an auditor can

24:23

actually perform a manual review, and I can do that in two ways. One, I can say

24:28

okay I'm going to start

24:29

with the auto QA score as a baseline and then enrich that further. So

24:34

Krithika would just

24:35

select, yep, and just proceed to review. So now it has taken essentially the auto

24:45

QA

24:45

scores and given them to me as the starting point. I can say, okay, I listened to the call

24:55

and I think this

24:55

is not positive, this should actually be negative, so I can make all those

24:58

changes. Cancel and please

25:00

go back to the call. Or what I can also do is I can start the manual review but

25:06

start from scratch.

25:08

Clean slate: I don't care about what the auto QA did, I just basically want to

25:12

review the entire call

25:15

or the entire case from scratch. But again, it's

25:18

the same scorecard;

25:19

you can actually configure different scorecards for auto and manual, and you

25:24

can go through the

25:25

entire process and I can also of course add notes. So what we've seen so far is

25:29

an auditor

25:30

looking at a complex case that had multiple languages and different types of

25:35

interactions, including voice;

25:38

we are able to essentially detect the sentiments and perform a review across

25:42

all of those.

25:44

The other thing that I'm going to talk about is we have built in a lot of work

25:49

flows;

25:50

assignments are just one of them. What are assignments? Assignments are a

25:55

pragmatic, a

25:56

programmatic, and a consistent way for auditors to define certain rule

26:06

conditions where you may say

26:08

every time there is a voice call, every time the CES, the customer effort score,

26:13

is between

26:14

whatever, 10 and 30, every time there is a likely-to-escalate signal, I always

26:20

want a manual QA to

26:21

be performed. So these are just consistent ways of enforcing compliance, or

26:27

enforcing that certain

26:29

cases always get audited.
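An assignment rule like the ones just described boils down to a predicate over case attributes. The sketch below mirrors those examples; the field names are invented for the illustration.

```python
# Hypothetical assignment rule: send a case to manual QA whenever any
# configured condition holds.
def needs_manual_qa(case):
    return (
        case.get("has_voice_call", False)
        or 10 <= case.get("ces", 100) <= 30              # customer effort score band
        or "likely_to_escalate" in case.get("signals", [])
    )

print(needs_manual_qa({"has_voice_call": False, "ces": 18, "signals": []}))  # True
```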

26:35

From here I can actually, as an auditor, trigger the review itself. So

26:37

what Krithika did was, she was in My Assignments and she clicked on, okay, I

26:41

'm just going to go

26:42

through my daily workflow and directly land on a particular case that meets

26:47

those criteria, and then

26:48

I can basically perform a review directly from here and again the process is

26:52

exactly what we

26:53

already saw. The other thing that we've built in is the disputes. Now whether

26:59

it's auto QA where

27:02

we are leveraging AI to be able to grade a case, to be able to grade an agent

27:06

or even if it's a

27:08

manual QA process, it's never going to be 100% accurate. There could be cases

27:15

where I as an auditor

27:16

think that, for example, the agent did not have a good opening, or whatever skills

27:21

and behaviors I

27:22

may have decided, but the agent does not agree with it. So this is where we have

27:26

the built-in workflow

27:27

right, the dispute workflow, where now we are looking at it from an auditor's lens. I

27:35

can for example open

27:37

up one of the disputes, and, yep, the far right, yep, open up one of the disputes

27:44

here as you can see

27:45

the auditor, maybe just scroll down, and then the other way please. Perfect. So

27:55

for example the

27:55

auditor, for the opening, which is, let's say, one of the skills that you want to

27:59

review, the auditor

28:01

thinks the introduction was bad. The agent, in this case Krithika, does not agree

28:06

with that, so she has

28:08

basically raised a dispute and said, well, the introduction was done as per

28:13

whatever our best

28:14

practices. So maybe I can re-listen to the voice call and I can potentially resolve

28:20

the dispute from here

28:22

and as an auditor I can say, okay, maybe I agree, it was my mistake. So all these

28:27

built-in workflows

28:28

are extremely important because it's not just about using AI to do something; we know

28:33

that

28:34

workflows like assignments and disputes are always going to happen, and our

28:39

platform enables

28:41

all of these workflows out of the box. All right, let's go into the

28:47

fourth part of the demo

28:48

which is for an account manager. Here we are actually going to look at, you know,

28:54

a strategic

28:55

account, look for any kind of commercial signals. These could be positive

29:00

commercial signals, right:

29:01

there is a renewal opportunity, there is an expansion opportunity. Or these could

29:05

be negative

29:05

renewal signals as well, for example there is a churn risk. I could check the

29:10

health scores and

29:11

the contributing factors. I can collaborate with the team to address some of the

29:16

high

29:17

priority cases that are associated with one of the strategic accounts that's

29:21

probably up for renewal

29:21

and then I can also, you know, track some of the other account trends. Let's go

29:28

into the demo.

29:29

So let's say an account manager is typically living in your CRM system. So

29:40

again as we were

29:43

saying before, these widgets can be configured and enabled in any of your

29:49

CRMs;

29:49

for demo purposes we have Salesforce. So when I'm looking at my Salesforce

29:53

account,

29:54

I can see that it's a high-value account. Maybe just scroll down on the

30:00

left, on the Salesforce side. There

30:00

we are: critical. So let's say it's a strategic account, the renewal

30:05

is up

30:05

in a month, it's a high-value account. And then on the right you see the signals

30:12

from the

30:13

SupportLogic system, where it shows you it has a fairly low account health score, it has

30:21

some active

30:22

escalations, there are a lot of negative sentiments and signals, and then there

30:26

is churn risk. So as

30:28

an account manager, this is something that I need to address. So directly from

30:32

here I can again launch

30:33

SupportLogic, and I land on what we call the account hub. Now as soon as we

30:42

land on

30:42

the screen you can see the account summary. This is again where we are using AWS

30:46

Bedrock services

30:49

to create the account summary. There are essentially three sections in the

30:53

account summary.

30:54

The first one, which is the current status, is essentially giving you some key

31:00

insights

31:01

grounding you in some of the facts that you need to be aware of. For example, in

31:04

this particular case

31:05

there are three active escalations that require an engineering fix as an

31:10

example

31:11

or there are six production issues that have been reported and all of them seem

31:16

to have a high

31:17

wait time. These are all grounding facts that you need to know as an account

31:20

manager. The second

31:22

section is the signals and the risk indicators. These are essentially warning

31:27

signals and sentiments

31:28

coming from our signal extraction engine. For example, it's showing that there

31:34

is potentially a

31:35

renewal risk, because there are three churn signals that were identified in the

31:39

last 90 days

31:40

or there are a lot of frustration signals, found in 30% of almost all the open

31:46

cases. The last section

31:48

is essentially the overall trends in the issues, and these are essentially,

31:54

I would say, potential next-best-action recommendations coming to the account

32:01

manager. For example in this

32:03

case, you know, the cases with engineering issues seem to be taking a longer time

32:07

than they typically

32:08

do, so maybe it's indicating that there is a lack of follow-through. Or, for

32:13

example, the customer

32:14

has a wait time that has increased 10% in the last one month, which possibly

32:20

could be indicating

32:21

that there is maybe a resource-allocation issue that the account manager

32:24

needs to be looking at.

32:26

So basically an account summary prepares an account manager on what are some of

32:31

the reasons why the

32:33

account has a churn risk, why the account does not have a good health score and

32:39

so on. And again

32:40

we are able to do this across all different kinds of post-sales interactions.

32:45

Currently we have the

32:46

support for support data; what we are also adding in the coming days is support

32:51

for customer success

32:52

interactions, and support for any of the onboarding tickets, which could also be

32:57

contributing towards

32:58

the overall account health score. Now if you scroll down you have a trend line

33:03

for the account health

33:05

score showing you over the last three months, six months, nine months how has

33:10

the account score

33:13

been trending. Now here again, for the account score, we use a heuristic model

33:19

that takes into account

33:20

many, many different things to be able to calculate the account score. For

33:24

example if there are dips

33:26

or spikes in the cases, if there are too many escalations, what's the severity of

33:31

the different cases

33:32

or what's the quality of the service that we've been providing, what are the

33:35

different kind of

33:36

sentiments that may have been detected over a period of time. So all these

33:39

different contributing

33:41

factors holistically go into the heuristic model and come up with an account

33:45

health score.
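A heuristic of this shape can be sketched as a weighted penalty model over the factor families just listed; the weights below are invented for the example, not the production model.

```python
# Illustrative account-health heuristic over escalations, severity,
# case-volume spikes, and average sentiment.
def account_health(m):
    score = 100
    score -= 8 * m["active_escalations"]
    score -= 5 * m["high_severity_cases"]
    score -= 20 * max(0.0, m["case_volume_change"])  # penalize volume spikes
    score += 15 * m["avg_sentiment"]                 # average sentiment in [-1, 1]
    return max(0, min(100, round(score)))

print(account_health({"active_escalations": 3, "high_severity_cases": 6,
                      "case_volume_change": 0.4, "avg_sentiment": -0.5}))  # low score
```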

33:47

What I can also do from this screen is drill down into the data. For example, I

33:51

see there are three

33:52

active and closed escalations; by clicking on that tab I have exactly those three

33:57

interactions

33:58

that are contributing towards that. For example, the first one is

34:03

an escalated case

34:05

that has a potential churn risk. I can directly drill down as an

34:10

account manager

34:13

into that particular case, again potentially use the Get Summary option to get a very

34:18

quick overview of

34:20

what's potentially happening with this particular case and then maybe

34:23

collaborate with the support

34:26

manager or maybe directly with the agent to help resolve the issue. The other

34:33

thing that we can

34:35

also do is, maybe just click on the 16 negative signals. Perfect, scroll down, and you

34:41

can see now all the

34:42

16 interactions that have negative signals; some of them also have churn risks,

34:48

and I can do exactly

34:49

the same as what we did before: from here I can go specifically to that particular

34:52

interaction

34:53

and see what's going on. So in a nutshell, the account hub provides a holistic

35:01

overview to an

35:02

account manager on what's really happening at the account. It's

35:08

quantified in an account

35:09

health score, which takes into account interactions across all the different

35:14

post-sales teams, not just

35:16

support (going forward, customer success interactions and onboarding tickets as

35:21

well), and gives you a

35:22

quick summary, through the account summarization that we've built in, for them to

35:26

get a quick

35:27

glimpse of what's potentially going wrong. Let's quickly also look at maybe the

35:33

positive commercial

35:33

signals, right? For example, in this case I've configured an email alert that is

35:39

picking up on a

35:40

positive commercial signal. If I click on View in SupportLogic, I can directly

35:47

from here

35:48

go into a case. Now this is again very important and something that we all know

35:53

in the support

35:54

industry. Sometimes there is so much of this information and knowledge sitting

35:59

in the support

36:00

cases that never gets to the account managers. For example in this particular

36:04

case there is a

36:06

potential renewal opportunity that the customer was talking about in the

36:11

context of a case

36:12

and we are able to pick up on some of these signals, which could be renewal or

36:16

expansion

36:17

opportunities, and then through the alerting framework notify the account manager

36:23

or anyone

36:24

else who needs to be notified. With that we come to the last part of the demo

36:33

which is for

36:35

the answer seeker and this is where I would like to invite on stage my

36:40

colleague Sario

36:42

to come and talk about the answer seeker and, more broadly, the knowledge

36:47

copilot. But before that,

36:49

Krithika, there were no claps, there were no cheers; it looks like our prayers

36:54

worked. These are all

36:56

live demos, my friends, and please spend as much time as you want with us. We're

37:05

going to be at the

37:06

demo booth. There are a lot of experts that we have here: we have Krithika, we

37:11

have Pali, we have our

37:11

Head of ML, Alex. We all have sessions here, but we would also love to spend some

37:18

time with each

37:19

of you, so please feel free to do so. Thank you so much. Sario, over to you.