Ryan Radcliff & Sariel Moshe & Anna Shtok 43 min

Ep 9: Knowledge Copilot: The Future of Answer Engines with Precision RAG


Precision RAG is more than a buzzword. It's a groundbreaking approach that quickly delivers accurate answers to queries from both your customers and internal teams. So what is Precision RAG exactly? And how does it deliver the quickest, most accurate answers from your data? Learn how this innovative technology reduces support cases, minimizes escalations, and accelerates case resolutions.

Key Topics
- Why Precision RAG stands out as the most accurate solution for surfacing answers
- The design philosophy behind xFind
- The tangible benefits these choices bring to end users

We will also cover integrations, including how your data sources are connected to the answer engine, and how you can start using Precision RAG.



0:00

So you can see here what we're going to be talking about today is how answer

0:05

engines have changed,

0:06

how knowledge copilots have changed, how expectations from your customers,

0:12

from your employees have changed, and what we've built to meet those challenges.

0:17

This is kind of just an overview of the agenda here,

0:20

and what we're really going to talk about

0:23

is what that ideal solution looks like.

0:26

We're going to get very deep and technical with what precision RAG is,

0:29

which is probably why you came.

0:31

We're going to look at how it works,

0:33

and we're going to look at what this looks like inside SupportLogic.

0:37

So kind of starting off, expectations for both your support teams and your

0:44

customers are changing.

0:46

As technology improves, as things get more complex,

0:52

you can see here that your support teams need embedded solutions in the places

0:56

where they are already working:

0:58

the CRM, the Slack workspace. And your customers want really great self-service.

1:05

They want to be able to ask their questions in the way that they phrase them

1:10

and get immediate answers back.

1:12

No longer are customers looking for a list of search results to comb through,

1:18

wondering if the right answer is there or not.

1:21

The market has really changed on these expectations of what both

1:25

agents and customers are looking for.

1:27

Wouldn't you say, Sariel?

1:29

Yeah, absolutely.

1:31

I think once ChatGPT came out,

1:34

and people understood the power of AI, of Gen AI,

1:40

I think the immediate consequence of that was understanding

1:46

that customer support is going to change,

1:49

and customers are expecting a better experience in finding the knowledge they

1:56

want.

1:56

I think it's been an interesting two years since then,

1:59

in how that played out and where the technology is going in enabling that

2:06

experience,

2:07

specifically on the more complex side of customer support.

2:11

Awesome take on that.

2:14

One thing I forgot to talk about at the top of the call,

2:17

we're running a contest on these webinars.

2:19

If you've been with us before, you know about this contest.

2:22

When a GIF shows up on the screen in the chat,

2:26

the first person to name the movie that this GIF is from

2:29

is going to get an Amazon gift card from us.

2:31

So this is a point in the webinar where we're going to pause for a quick moment

2:35

because I know what's coming up.

2:36

And I invite you to name that GIF.

2:38

We have a bit of a science theme today because we're talking to Anna,

2:43

who is an awesome scientist, and so all of these are going to be science based,

2:46

just as a little hint.

2:47

So here's our first one, and I'm going to look to the chat and see

2:52

who can name it.

2:53

There we go, Chris, on it like you knew it before it came up.

2:57

I'm amazed.

2:58

That was so fast.

2:59

Nice job.

3:00

I don't think I can type that fast.

3:02

That's great.

3:03

So let me click away on that and we can kind of get back to your regularly

3:06

scheduled program.

3:07

So now let's talk about the challenges and what the ideal solution

3:12

is

3:12

to these challenges, right?

3:13

We know that from these messy data sources, we want

3:20

immediate answers where folks are already working.

3:23

We want them to be asked by technical experts that know what they're looking

3:28

for,

3:28

but also customers.

3:29

And we want all that to happen from support cases, from bug tracking software

3:35

like Jira.

3:36

Sariel, is this right?

3:38

Is this kind of how you approached solving this originally?

3:42

Is this kind of the problem statement?

3:43

Yeah, absolutely.

3:45

So I noted that where the game is really still being played is in the

3:53

more complex

3:54

scenarios of customer support.

3:56

The reason is in those scenarios, it's not just a simple list of frequently

4:03

asked questions

4:04

that need to be solved.

4:05

It's usually very intricate issues dealing with multiple options of different

4:11

products and versions.

4:14

And the customers don't necessarily know what they're searching for, how to

4:19

keyword their way to it.

4:20

Even technical experts will have a hard time, and usually they're doing it

4:26

as part of

4:26

solving the issue in the case.

4:28

And the main knowledge sources, from what we've been seeing in many of these

4:33

companies,

4:34

the main knowledge sources are not nicely organized article knowledge bases,

4:40

but rather

4:41

all the past cases that the company has already dealt with.

4:44

That's the main resource that can really bring up answers to these questions

4:51

and possibly next

4:52

steps.

4:52

So past cases, Jira, all the areas where work has been done on these issues.

4:57

And that's really what's setting apart the serious technology that can really

5:05

take Gen AI

5:07

and enable it in an enterprise setting from many other solutions and vendors

5:13

out there

5:13

that don't really know how to deal with that level of complexity.

5:19

That makes a lot of sense.

5:21

At this point, what I want to do is throw a few polls at the audience.

5:26

So we're going to launch poll number one right now about the knowledge base

5:30

that you have

5:33

existing for your team. Do you have a dedicated knowledge base for your

5:37

employees right now?

5:38

And we're kind of looking at the answers as they come in.

5:43

And it looks like it's a mix.

5:45

Some teams have a dedicated knowledge base, some don't.

5:49

So it sounds like there's a real possibility there for teams to leverage messy

5:55

data,

5:56

build knowledge base articles off of successes, but they can get started with

6:01

precision

6:01

RAG without having that knowledge base completely formed and completely ready.

6:05

So another poll I want to kind of launch after this is,

6:10

does your current knowledge base use AI to generate answers?

6:14

Is this a knowledge base with a simple search on top?

6:17

Is this a knowledge base that's using an AI component to it?

6:22

I'd love to hear about that as well.

6:24

So it sounds like no is kind of winning on that side.

6:28

It sounds like folks are using knowledge base systems.

6:31

Well, it seems like actually kind of a tie now that more votes are coming in.

6:35

So a third question for you, we're going to get all the polls kind of wrapped

6:37

up in one swoop.

6:38

Are you using the information in your support tickets as a knowledge source?

6:44

That's our third poll.

6:46

And I'd love to kind of see how the room feels about that one as well.

6:56

And it seems like it's pretty split.

6:58

So that's interesting.

7:00

So it looked like most of our viewers have a dedicated knowledge base.

7:04

Most of them are not using AI to generate answers.

7:09

And it's a tie in using support tickets as an information source.

7:16

That's really interesting information.

7:18

So before we kind of get into what Precision RAG is,

7:22

we're going to just pause for a moment and talk about a customer

7:26

who's having great success with it.

7:27

As you can see here, Vanit Puri from Cvent is using this knowledge co-pilot

7:32

to help his team find the right answers, and he's finding great success with it.

7:37

So now let's kind of get into what Precision RAG looks like.

7:40

And here's kind of a quick animation of an overview of this.

7:44

So you can see here, we start with a support case.

7:47

And we're going to talk about why Precision RAG is this ideal solution

7:52

and kind of get in depth with it.

7:55

So, Sariel, can you talk to us about what this is

7:58

and then lead us into the more technical overview?

8:01

Sure.

8:03

So what we're seeing here on the left-hand side is a support case.

8:09

And again, in the more complex scenarios,

8:12

usually these are lengthy descriptions that can go on for quite a few

8:17

comments

8:17

and internal comments on trying to figure out what's going on.

8:22

And that's a lot of information right there that can be used already as a query

8:30

as an issue that needs to be solved.

8:32

And then what we do in our implementation of Precision RAG in xFind,

8:40

now part of SupportLogic, is first of all to summarize the details

8:45

of the case.

8:46

So figure out what exactly is going on there

8:49

and really make it very easy for the engine and for the agent at the same time

8:54

to be able to use this information that already exists there implicitly as a

9:00

query.

9:02

That's the first step.

9:02

The second step is querying all the relevant data sources.

9:07

So as I noted, that's not only going to be a knowledge base,

9:09

it's going to be all the past cases and all the Jira tickets or whatever other

9:14

sources are out there

9:15

that hold a lot of relevant knowledge to dealing with this case.

9:19

And really, this is something that is not obvious to many people:

9:26

implementing Gen AI in an enterprise setting really comes down to how

9:31

good your search engine

9:33

is rather than necessarily how good the large language model can then provide

9:38

an answer.

9:39

And really the search engine is a critical element in being able to retrieve

9:44

the specifically

9:45

relevant items and specifically relevant parts of those items to feed to a

9:50

large language model

9:51

to generate an answer.

9:52

And then once we've done that, then we can generate an answer.
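To make the flow Sariel describes concrete, here is a minimal, self-contained sketch of the three steps: treat the case itself as the query, search every relevant source, then generate. The function names and the naive keyword-overlap search are illustrative stand-ins for this write-up, not SupportLogic's actual implementation.

```python
# Minimal sketch of the case-to-answer flow described above. The function
# names and the keyword-overlap "search" are illustrative stand-ins.

def summarize_case(comments: list[str]) -> str:
    # Step 1 stand-in: the real system uses an LLM to distill the case
    # thread into an explicit query; here we just join the comments.
    return " ".join(comments)

def search_sources(query: str, sources: dict[str, list[str]], top_k: int = 3) -> list[str]:
    # Step 2: query every source (past cases, Jira, knowledge base),
    # scoring each document by keyword overlap with the query.
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(doc.lower().split())), doc)
        for docs in sources.values()
        for doc in docs
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def generate_answer(query: str, passages: list[str]) -> str:
    # Step 3 stand-in: the real system prompts an LLM with the retrieved
    # passages; here we just assemble what that prompt would contain.
    context = "\n".join(passages)
    return f"Issue: {query}\nAnswer from context:\n{context}"

sources = {
    "past_cases": ["Login fails after SSO upgrade; fixed by rotating the cert."],
    "jira": ["JIRA-42: SSO cert rotation breaks login on version 3.2."],
}
query = summarize_case(["SSO login fails on 3.2 after upgrade"])
print(generate_answer(query, search_sources(query, sources)))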

9:55

And this is where I'll hand it over to Anna to really go more in depth and

10:00

explain

10:00

how we built that, what's going on in the engine that we built that enables

10:08

this precision

10:09

RAG to happen. Okay, so RAG is essentially, as you probably know, retrieval-

10:17

augmented

10:17

generation, where retrieval comes as a critical part of this process. Because

10:26

if we are giving

10:27

to the large language model, to the generative model, an incorrect piece of

10:33

text to answer from,

10:35

then the answer, of course, is not going to be correct. And it will hallucinate

10:41

sometimes in

10:43

very hilarious ways. So our main objective here is to get the most accurate

10:50

piece of text

10:52

from the retrieval for the generative model. So we're starting by ingesting all

11:02

the different

11:03

knowledge sources in the company, where we have a very, very flexible ingestion

11:09

pipeline,

11:10

which allows us not only to process the data and clean it, but also model it

11:15

because in an enterprise,

11:18

categories, different products, different versions, they mean a lot. An answer

11:23

could be

11:24

completely different if I'm talking about product A or if I'm talking about

11:28

product B.

11:29

So we have a very, very flexible modeling procedure that allows us to

11:34

fit ourselves to any type of data, any source, and build an index for us to

11:45

work with.
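As a rough illustration of what "modeling" the data can mean, here is a toy metadata-aware index: each document carries the enterprise fields (product, version) that matter, so search can hard-filter on them. The field names and the in-memory index are assumptions for illustration only, not the actual ingestion pipeline.

```python
# Illustrative sketch of metadata-aware ingestion: documents are stored
# with the fields the modeling step extracts, so search respects them.

from dataclasses import dataclass, field

@dataclass
class Doc:
    text: str
    product: str
    version: str

@dataclass
class Index:
    docs: list[Doc] = field(default_factory=list)

    def ingest(self, raw: dict) -> None:
        # "Cleaning" here is just whitespace normalization; the real
        # pipeline is far more involved, per Anna's description.
        self.docs.append(Doc(
            text=" ".join(raw["body"].split()),
            product=raw.get("product", "unknown"),
            version=raw.get("version", "any"),
        ))

    def search(self, query: str, product: str) -> list[Doc]:
        # An answer for product A may be wrong for product B, so the
        # product field hard-filters candidates before text matching.
        terms = set(query.lower().split())
        return [d for d in self.docs if d.product == product
                and terms & set(d.text.lower().split())]

index = Index()
index.ingest({"body": "Restart  the sync agent.", "product": "A", "version": "2.1"})
index.ingest({"body": "Restart the sync agent.", "product": "B", "version": "9.0"})
print(index.search("sync agent restart", product="A"))  # only product A's doc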

11:47

So after we process and index the data, we have a very high-quality search that is

11:56

built on multiple

11:59

search engines that together form a solid search result.
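One generic way to combine several search engines into a single ranking is reciprocal rank fusion; the sketch below shows that idea. SupportLogic's actual ensemble is proprietary and certainly more involved, so treat this only as an instance of the general technique.

```python
# Reciprocal rank fusion (RRF): a standard way to merge rankings from
# multiple search methods into one list. Generic sketch, not the
# proprietary ensemble described in the webinar.

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    # Each ranking is a list of doc ids, best first. A document's fused
    # score is the sum of 1/(k + rank) over every ranking it appears in.
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# A keyword engine and an embedding engine disagree; fusion rewards the
# document that both rank reasonably highly.
keyword_ranking   = ["doc_7", "doc_2", "doc_9"]
embedding_ranking = ["doc_2", "doc_5", "doc_7"]
print(rrf([keyword_ranking, embedding_ranking]))  # doc_2 and doc_7 first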

12:08

And we will, in the next slide, compare

12:11

ourselves to the state of the art. And then before we are going to answer based

12:17

on what we found,

12:18

we are going through a guardrail model, which is a proprietary technology of

12:23

SupportLogic,

12:25

where we essentially check whether the most similar documents that we have

12:31

found indeed

12:33

contain the information that we are trying to answer from, because often in knowledge

12:43

bases in

12:44

an enterprise, some data is missing. And this is why, by the way, the tickets

12:49

are very useful

12:50

because tickets often contain much more relevant and up-to-date data than

12:56

the static sources.

12:58

So before we go to the generative LLM, we are indeed testing that the data

13:06

contains the answer

13:08

so that with a high probability, we have a chance to give a correct answer. And

13:13

only then we are

13:14

sending this piece of text or several pieces of text that we have found with

13:20

the question that

13:21

the user asked, and asking the generative LLM to generate an answer based on

13:26

what we have found

13:27

and validated. After the user gets the answer, along with references to the pieces of

13:35

text, to those

13:36

documents from the company knowledge, they can verify that we have indeed

13:45

fetched the correct

13:47

data and validate what the generative LLM gave them. Then they can give us

13:54

feedback, which is an

13:58

additional way to improve ourselves, to improve the search, to improve the

13:59

guardrails.

14:04

So the system could improve over time. So this is the basic structure of

14:15

Precision RAG,

14:17

with its emphasis on search quality and on verification and feedback.
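As a toy illustration of the verification step, the sketch below scores whether the best retrieved passage can plausibly answer the question, and refuses to generate otherwise. The guardrail model itself is proprietary; the term-overlap scorer here is only a stand-in for it.

```python
# Sketch of the guardrail step: before any generation, verify that the
# retrieved text actually contains the answer. Toy scorer only.

def answerability(question: str, passage: str) -> float:
    # Fraction of question terms present in the passage. The real
    # system uses a trained model, not term overlap.
    terms = set(question.lower().split())
    return len(terms & set(passage.lower().split())) / max(len(terms), 1)

def guarded_answer(question: str, passages: list[str], threshold: float = 0.5):
    best = max(passages, key=lambda p: answerability(question, p), default="")
    if answerability(question, best) < threshold:
        # Saying "no answer" builds trust, as discussed later in the webinar.
        return None
    return f"Generate from: {best!r}"

print(guarded_answer("reset admin password", ["To reset the admin password, open Settings."]))
print(guarded_answer("rotate api keys", ["To reset the admin password, open Settings."]))  # None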

14:28

That makes a lot of sense. A query is coming in, you are processing the language

14:31

from the query,

14:32

you have guardrails, you have filtering going on with the knowledge sources,

14:36

you have this

14:37

feedback loop that is making everything improve over time. It is a very

14:43

exciting system. And I

14:44

know in a second you are going to get into some results when you tested this

14:48

against a very popular

14:49

alternative. Before we do that, we have got another competition here. I would

14:56

love to see if

14:56

folks know what this one is from. There is another scientist looking shocked.

15:01

It is my theme for the

15:02

day. Zach has got it. Jurassic Park, good job. I can click that off the screen.

15:09

Anna, I would love to hear about this study against OpenAI and the success that

15:15

you saw

15:16

with the Precision RAG engine that you built.

15:20

So here we wanted to evaluate our search against one of the

15:27

most popular state-

15:29

of-the-art embedding models available on the market, which is OpenAI's

15:37

ada embeddings,

15:40

which is indeed quite a powerful way to provide search capabilities. So we

15:50

compared them on four

15:52

core collections. We created those collections because, to test a search

15:59

engine, you need

16:00

a set of queries and a set of documents that are relevant to those queries.

16:05

So we have used a technique also used by Google to create a benchmark where we

16:13

take a document

16:14

from a collection, we will sample a passage from this document, and then we

16:19

create a question from

16:21

this passage. And now we are doing the reverse engineering. We are trying to

16:25

find, given that

16:27

question, the document that it originated from. So this document is the

16:32

relevant document we are

16:34

searching for. Now we have here four different collections. The first two,

16:41

Ember and Outlabs, these are based on the knowledge bases of companies, which

16:49

have data

16:50

that is quite common, not very complex, without very narrow, specific lingo, and

17:01

the last two ones

17:04

8x8 and Waters, these are two companies that have

17:08

like a more sophisticated knowledge that is less common to the overall average

17:16

information on the web. The reason that this is important is because the ada

17:23

embedding or any

17:25

other embedding that you can find available is trained on data that is scraped

17:31

from the internet.

17:32

So it is familiar with the lingo that is common, that can be found

17:37

everywhere. But as you go

17:38

into the enterprise, you find more and more specific lingo, specific usage

17:44

combinations of terms.

17:46

And this makes it much harder to use pre-trained embeddings to get an accurate

17:54

search. And you

17:55

can see it, first of all, in the ada column: the Success@5, the ability

18:03

to find

18:04

the correct document within the top five results, drops as you go to the more

18:10

sophisticated,

18:11

more narrow, more specific types of text. So here we compare ada against

18:22

our Success@5,

18:24

and we see a stable, large difference, and the differences get larger as

18:33

the data becomes

18:34

more complex, because we are using an ensemble of search methods and not only

18:41

one search method.

18:43

And our modeling of data is much, much more flexible than something that you

18:48

can get

18:49

on the internet out of the box. So we can see here the success rates,

18:55

and the complementary failure rates. So this is a crucial

19:01

difference because as you give

19:05

to the generative language model, the one that you want to generate an answer,

19:09

the correct piece of

19:10

text, the more likely you are to get a correct, clean answer. So this is

19:18

basically the benchmark.
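For readers who want to reproduce the shape of this evaluation, here is a self-contained sketch of the benchmark procedure Anna describes: sample a passage from a document, derive a query from it, and measure how often the source document comes back in the top five (Success@5). The query-generation step here is a crude stand-in for the generative model used in practice.

```python
# Sketch of the benchmark: passage -> query -> check that the source
# document is retrieved in the top five results (Success@5).

import random

def make_query(passage: str) -> str:
    # Stand-in for LLM question generation: sample a few content words.
    words = [w for w in passage.split() if len(w) > 4]
    return " ".join(random.sample(words, k=min(3, len(words))))

def overlap_search(query: str, corpus: dict[str, str]) -> list[str]:
    # Baseline retriever: rank doc ids by term overlap with the query.
    terms = set(query.lower().split())
    return sorted(corpus, key=lambda d: -len(terms & set(corpus[d].lower().split())))

def success_at_5(corpus: dict[str, str], search) -> float:
    hits = 0
    for doc_id, text in corpus.items():
        query = make_query(text)
        top5 = search(query, corpus)[:5]
        hits += doc_id in top5          # did the source doc come back?
    return hits / len(corpus)

corpus = {
    "kb1": "Rotate the signing certificate before the expiry window closes.",
    "kb2": "Increase the connection pool size when timeouts appear under load.",
}
print(success_at_5(corpus, overlap_search))  # 1.0 on this tiny corpus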

19:22

Thank you, Anna. That's really interesting to hear about this technology and

19:28

how well it's working.

19:29

Now that there's this engine behind this knowledge co-pilot, it's really

19:35

interesting to kind of look

19:36

at the spectrum of folks that this can help, right? Sariel, could you walk us

19:41

through all of these

19:42

use cases? And I know in just a moment, we're going to get into what these

19:45

actually look like

19:47

in the product. But if you could kind of show us how every one of these groups

19:51

in an organization

19:52

benefits, I think that'd be great. Sure. So once you have a precise enough

19:58

engine,

20:00

really it can provide value across the entire enterprise. And starting from the

20:06

left,

20:07

we're starting with the case flow. So the end customer, the one who really

20:13

needs the solution,

20:15

when they're coming in with their issue, whether it's in the portal or in a

20:18

chatbot scenario,

20:19

we want to be able to provide them with the answer right then and there based

20:23

on their

20:23

description of the issue. And then the support agent and the support manager,

20:28

dealing with

20:30

complex issues, enabling them to solve them much quicker, reduce escalations.

20:37

Many

20:37

escalations happen just because no one knows that this has already been

20:41

escalated in the past.

20:42

Once you're able to find that Jira, that historical Jira ticket, and provide that

20:47

information, it can remove a

20:49

lot of those. And of course, this serves the general support aim, which is to

20:56

reduce case

20:57

volume and avoid unnecessarily lengthy cases. But then moving on to other areas

21:04

of the enterprise.

21:05

So in customer success, right, a lot of the same issues come up, where you

21:11

want to be able to

21:12

deal with newly incoming customer requests and issues based on historical

21:20

work that you've

21:21

done with other customers. So you want to be able to retrieve that information

21:25

precisely

21:27

based on the context of an account. In IT, very similar to support, you're trying

21:33

to solve internal

21:34

issues and you want to be able to retrieve what's been done with them in the

21:39

past. And even in

21:40

product and engineering, right, if we're dealing with JIRA or other types of

21:48

issue

21:50

systems. So when issues come up, when development questions arise, you want to

21:58

be able to really

21:59

know what's been done in the past, how it's been approached. All that

22:04

information in the

22:05

existing enterprise is many times not very easy, not very efficient to reach

22:12

and to find answers

22:14

based on. And that's what a very precise engine can enable. Awesome. Thanks for

22:21

that. Now, let's

22:22

dive into what this looks like across the platform, right? There are these

22:30

four different

22:31

paths to these precise answers, to being able to leverage this technology that's

22:35

working so well.

22:37

Sariel, could you lead us through these? And I'll play the slide jockey here and

22:44

we can look at these

22:44

four areas and how they help out different groups. Sure. So it's important to

22:49

note that the same

22:50

engine that Anna described earlier, the exact same engine, can really power

22:55

all these different

22:56

experiences. And that's one of the powers is you don't need to build separate

23:01

engines to power

23:02

each of these. The exact same engine can connect to all the relevant knowledge

23:08

sources and appropriately

23:10

provide answers to the relevant people, the relevant users, where they are as

23:16

they need them.

23:17

So starting with the portal assist, I think we already started going through

23:23

the slide. So the

23:24

portal assist, really the idea here is that all companies today, I think, have

23:29

some level of

23:30

a search bar or similar experience in their site enabling customers to solve

23:40

their own issues.

23:41

But the reality today is that even with Gen AI out there, most companies I'm

23:48

seeing are still

23:49

relying on a very keyword-based approach to that experience. And really, the

23:56

better way forward

23:59

in providing the relevant knowledge and the relevant value to your customers is

24:04

providing

24:05

them the answers when they need them as they're describing them. And this can

24:09

be in the portal,

24:10

this can, by the way, also be in the case form as they're describing their

24:13

issue if you already

24:13

got there. You want to not only bring them back a relevant list based on

24:20

knowledge, based on

24:20

keyword searches, you want to enable them to describe their issue, bring back

24:25

those items,

24:26

but then be able to provide an actual answer and be sure that that answer is

24:30

going to be accurate,

24:31

it's going to be precise, it's going to be relevant to what they're asking. So

24:35

that's the portal side

24:37

of things. If you move over to the chatbot side of things, so again, the same

24:41

engine, if you want

24:43

to enable a chatbot experience, that's the next slide over. So definitely,

24:50

sorry, we have a question,

24:51

I just wanted to, while we're still on the portal slide, Chris Raul asks: does

24:56

XFind host in a

24:57

standard portal or can it integrate into a customer portal like Salesforce

25:02

Community?

25:03

So thanks for the question. So absolutely, we can integrate into Salesforce

25:08

Community, we have

25:10

a Salesforce specific widget for that, and we can connect in addition to that

25:15

to pretty much any

25:18

search bar out there, we're an API based engine, so any search bar can really

25:24

connect to us and

25:26

get back the relevant answers. And that's actually a good segue to the chatbot

25:32

because

25:33

the chatbots today, many of them are still built around a closed set of issues

25:41

that they are able

25:42

to solve. But really, if you're having this chat type of flow with

25:48

your customers,

25:49

the best thing would be, as they're describing their issue in the chat, to right

25:54

there and then

25:54

solve that issue, right? Not only bring them back possible items

25:59

and have them

26:00

have to go and open and read them, but just bring them the relevant information

26:04

right then and there.

26:05

So that's another type of experience that this engine can enable and

26:12

again,

26:13

in the more complex types of issues as well.
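Since the engine is API-based, wiring any existing search bar, portal, or chatbot to it amounts to a single HTTP call. The endpoint URL, field names, and response shape below are hypothetical placeholders for illustration, not the documented xFind API.

```python
# Hedged sketch of calling an answer-engine API from a search bar or
# chatbot. URL and JSON fields are hypothetical placeholders.

import json
import urllib.request

def ask_answer_engine(question: str, user_role: str = "customer") -> dict:
    payload = json.dumps({
        "query": question,        # free-text issue description
        "role": user_role,        # lets the engine apply visibility rules
        "top_k": 3,
    }).encode("utf-8")
    req = urllib.request.Request(
        "https://answer-engine.example.com/v1/answers",  # placeholder URL
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# A portal widget would call this as the customer types their issue,
# then render the returned answer and its source references inline.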

26:16

That's great to hear. Chris, another question about,

26:20

can XFind integrate into a case creation workflow and help deflect a case which

26:27

a customer has

26:28

opened? Yes, yes, absolutely. Yes. So that's another integration, not getting

26:36

into the specifics of

26:37

the user interface, but, for example, as the customer is typing their issue,

26:42

their issue,

26:43

reading that information, sending that as a query and providing back an answer.

26:48

That's another type

26:48

of experience that the same engine can provide. Awesome, thank you. Coming up

26:57

on this next slide,

26:58

we have our third GIF and we're looking at the chat to see who knows it. I'd

27:03

love a new winner;

27:03

I think we're going to try to limit folks to just winning one gift card. So if

27:08

anybody else on the

27:09

call knows this one, looks like Eliza has it. I hope I said your name right.

27:15

Oppenheimer, nice job.

27:17

All right, we can get back to business. So this is the Slack one.

27:22

Yeah, tell us all about

27:23

this, Sariel. Yes, so actually what we've been seeing again over the past few

27:32

years is a shift

27:34

in the way specifically support teams, but not only, I think it's enterprise-

27:38

wide,

27:39

in the way they really ask questions and try to find answers. Where rather

27:46

than search, a lot

27:47

of times they'll just go to a Slack channel, ask the

27:50

question there, and hope

27:52

the team will answer it. Well, why not, as they're doing that, instead of

27:56

them having to go to a

27:59

Slack channel and wait for a response, you already have all that knowledge in

28:02

place across the

28:03

enterprise, across all your knowledge sources. Why not power that experience

28:07

with the exact same

28:09

engine I was describing? I can actually show this live; this is on Support

28:16

Logic data.

28:18

Stop sharing and then you can share. Okay, so this is a Slack app we're working

28:27

with

28:28

in SupportLogic. You just have to share your screen first. It's up in the top

28:32

left corner.

28:33

Yeah, so I'll just type in a question. Again, this can be as complex as

28:40

you would like, but

28:44

xfind and then the question, press enter, and then wait a few seconds and it'll

28:51

just come back with

28:52

an answer. What's interesting here is that it gives great answers to the question,

28:58

but it's relying

28:59

here on past cases. So that's really interesting because it doesn't have to

29:04

necessarily work with

29:05

a knowledge base, with a nicely organized knowledge base. What we did here is

29:11

we took

29:12

past cases, indexed them and they're powering the answer. So just think how

29:16

powerful that is to

29:17

be able to quickly answer questions coming in in any channel from anywhere with

29:22

all the

29:23

knowledge you already have in place. It really makes the whole process so much

29:27

quicker and

29:28

more efficient. That's so exciting. So you've got three different channels for

29:35

customers

29:37

to help serve themselves, right? Deflect cases, have fewer cases. You've got

29:41

chat,

29:42

you've got this portal space that employees can use as well. And what's

29:45

exciting about these,

29:46

and this is a question we had earlier in the week, is that you can take

29:50

knowledge sources,

29:52

like Atlassian Jira and Confluence, and you can limit what employees

29:58

versus customers

30:00

see. So the employees can see the whole breadth of material out of these

30:03

sources and then customers

30:04

can see a subsection of it. So it's not just off and on per source, which is

30:09

really cool.
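Here is one way such per-audience visibility could be expressed; the rule format below is purely an assumed illustration, not SupportLogic's actual configuration schema.

```python
# Sketch of per-audience visibility rules: a source is not just on or
# off; employees see its full breadth while customers see a subset.
# Rule format is an assumption for illustration.

VISIBILITY_RULES = {
    "confluence": {
        "employees": {"spaces": "*"},               # full breadth
        "customers": {"spaces": ["public-docs"]},   # a subsection only
    },
    "jira": {
        "employees": {"projects": "*"},
        "customers": {"projects": []},              # hidden entirely
    },
}

def visible_scope(source: str, audience: str):
    rule = VISIBILITY_RULES.get(source, {})
    return rule.get(audience, {})

print(visible_scope("confluence", "customers"))  # {'spaces': ['public-docs']}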

30:09

Here I'm going to share my screen. We can get into the fourth channel, which is

30:15

a very exciting

30:16

channel. This is the Salesforce integration, CRM integration.

30:20

Right. So what we're doing here is we're integrating our existing widget into

30:26

this

30:26

SupportLogic Salesforce widget. And so what we're doing here is a three-tiered

30:33

experience for the agent

30:35

as they're solving the case, taking away a lot of what's taking agents' time

30:40

today, which is

30:41

trying to search for knowledge and then read through the knowledge and try to

30:45

figure out how

30:46

that can help them on the case. So we're really doing a lot of the work for

30:50

them.

30:52

First step is summarize the case automatically. So take all the information in

30:57

the case,

30:58

subject, description, comments, metadata, and automatically summarize that into

31:03

a nicely

31:04

organized summary of what the problem is, what the symptoms are, and what next

31:09

steps have already

31:10

been discussed. And then based on that summary and all the information in the

31:15

case, retrieve the

31:16

relevant items much like we did in the portal. So retrieve all the relevant

31:20

items

31:22

to the case. Again, articles, past cases, Jira, whatever could be

31:29

relevant.

31:30

Again, that's happening automatically. So now the user did not have to do

31:34

anything, the agent

31:34

did not have to type any query, did not have to do anything, automatically

31:37

summarize the case,

31:39

automatically retrieve the relevant items. And then the third step, which is

31:45

the most exciting,

31:46

is summarizing all that knowledge into either what next steps, based on past

31:53

cases, for example,

31:54

could be taken, or what the actual solution is, right, that has already been

32:01

noted.

32:02

So again, summarize the case, retrieve all the relevant information, and then

32:08

summarize that information into what knowledge the agent needs to know right

32:11

now,

32:12

as he or she is working on the case, taking a lot of that load away from the

32:18

agents and

32:18

enabling them just to work the case and provide the best experience to the

32:23

customer.
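Pulling the three tiers together, here is a compact sketch of the automatic loop: summarize, retrieve, then synthesize next steps, with no query typed by the agent. Every helper here is a hypothetical stand-in for an LLM or search call, not the widget's internals.

```python
# The three tiers Sariel walks through, as one sketch. The llm and
# search callables are trivial stand-ins for illustration only.

from dataclasses import dataclass

@dataclass
class Case:
    subject: str
    description: str
    comments: list

def agent_assist(case: Case, search, llm) -> dict:
    # Tier 1: auto-summarize problem, symptoms, and steps already taken.
    summary = llm(f"Summarize problem/symptoms/next steps:\n{case.subject}\n"
                  f"{case.description}\n" + "\n".join(case.comments))
    # Tier 2: auto-retrieve articles, past cases, Jira items for the summary.
    items = search(summary)
    # Tier 3: synthesize the retrieved knowledge into next steps or a solution.
    advice = llm("Suggest next steps or the known solution, given:\n"
                 + "\n".join(items))
    return {"summary": summary, "related_items": items, "advice": advice}

# Example wiring with trivial stand-ins:
result = agent_assist(
    Case("Sync fails", "Agent crashes on start", ["Restarted host"]),
    search=lambda q: ["Past case 118: crash fixed by clearing agent cache"],
    llm=lambda prompt: prompt.splitlines()[0],
)
print(result["advice"])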

32:24

Oh, we've got a couple of questions from the listeners that I think would be

32:26

great follow-up

32:27

to what you're showing. Can Slack itself become a knowledge source from Slack

32:32

channels?

32:33

Absolutely. Yes. So we can take channels that are dealing with certain topics

32:41

and index them as another knowledge source. So that is absolutely an option in

32:46

Slack or Teams,

32:47

of course. Awesome. And then another question that we have here from Zach,

32:50

tickets over time, information that gets stale, how does information that's no

32:58

longer relevant

32:59

get weeded out of what the system is looking at? Yeah, that's another really

33:05

great question.

33:06

So there are two ways we approach that, and maybe Anna can add on if you

33:13

have any other thoughts.

33:14

First is, time itself can be a signal. So the further you go away from

33:23

that past case,

33:25

we can use that as a signal of it being less relevant. Much like the text

33:29

itself is a signal

33:31

of relevance, the time can be a signal as well. That's one approach. The

33:38

second approach is a bit

33:39

more specific to a company, right? Depending on the speed at which the product

33:51

changes and issues

33:51

come up, we can adapt that signal according to the company-specific type of

34:00

data.

34:01

So for example, some companies, their product doesn't change that often,

34:06

maybe once every few months, they'll throw in an update. Other companies are

34:11

changing on a daily

34:12

basis, right? So you want to be able to adapt the way you use time according to

34:18

how that company is

34:20

working. That's great. Sorry, go ahead, Anna. Yeah, another thing, with respect to

34:30

updating: as I mentioned before, we have very, very flexible ways to ingest

34:36

data.

34:36

And we are actually trying to index only data that is relevant. For example, if

34:43

you have two teams,

34:45

and the ticketing system is like a hub that holds, of course, the tickets of several

34:51

teams, but only a

34:52

specific team is working with SupportLogic, then we could limit

34:57

the data we index

34:59

only to the tickets of that team. Or for example, if you have a product that

35:05

got deprecated and is

35:07

no longer supported and no longer exists, you can just tell us, okay, this

35:13

product is no longer supported,

35:15

we don't want the previous tickets from it in the system because, you

35:21

know, it's irrelevant,

35:22

no one supports it anymore. So the ingestion pipeline is built

35:28

in a way

35:29

that it's very, very easily customizable and adjustable. And the period of time we

35:35

are looking back

35:36

could be very easily changed, even conditionally, like for a certain product we

35:42

take three years back

35:44

for another product we look two months back. So we are very flexible about

35:50

it. And as I mentioned,

35:51

we could also prioritize over time. Again, this is a bit more complex

35:59

because the time

36:00

and the relevance both carry some weight in the final ranking. But most

36:09

of our customers are really

36:11

doing great with the filters and the deprecation rules they define.
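A compact sketch of the two mechanisms just described: an exponential time-decay factor whose half-life can be tuned per company or per product, and ingestion filters for team, deprecated products, and per-product lookback windows. All parameter names and the decay formula are illustrative assumptions, not the production system.

```python
# Illustrative time-decay scoring plus ingestion filters, under the
# assumptions stated above.

from datetime import datetime, timedelta

def decayed_score(text_score: float, age_days: float, half_life_days: float) -> float:
    # A product that changes daily gets a short half-life; a slow-moving
    # one gets a long half-life, so old tickets keep more of their weight.
    return text_score * 0.5 ** (age_days / half_life_days)

LOOKBACK = {"product_a": timedelta(days=3 * 365),   # slow-moving product
            "product_b": timedelta(days=60)}        # fast-moving product
DEPRECATED = {"product_z"}                          # never indexed

def should_index(ticket: dict, team: str, now: datetime) -> bool:
    return (ticket["team"] == team
            and ticket["product"] not in DEPRECATED
            and now - ticket["created"] <= LOOKBACK.get(ticket["product"],
                                                        timedelta(days=365)))

now = datetime(2024, 6, 1)
print(decayed_score(1.0, age_days=30, half_life_days=90))   # ~0.79
print(should_index({"team": "support", "product": "product_b",
                    "created": datetime(2024, 1, 1)}, "support", now))  # False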

36:20

That's awesome. Thank you both for jumping into these questions so much.

36:25

And I think we

36:25

answered Chris's last question about unconventional data sources. It sounds

36:29

like that's a yes on

36:31

ingesting non-standard knowledge sources. There's a question here from Eliza.

36:36

And I think it's a

36:36

perfect transition into our next slide. She asks, how many Salesforce agent

36:41

assist features do you

36:42

offer from this list? So one of the big goals for SupportLogic is to be the

36:47

complete tech

36:48

consolidation of everything a support operations team needs. It's one of our

36:52

main goals for

36:53

the enterprise, right? For large companies, for medium-sized companies that

36:58

are very serious

36:59

about their support experience. This is the suite of use cases that we're

37:03

really solving for.

37:04

I know there's a lot on this screen. You can see across here that we're

37:09

tackling the major high

37:11

value elements in support operations efficiency. We've got a full suite of

37:15

quality monitoring.

37:16

And then in our agent productivity area, we are hitting on many of these same

37:21

things of

37:21

summarizing a case. And of course, we can update a case in our platform and in

37:26

our plugin.

37:27

Answering questions with knowledge is a lot of what we're talking about today.

37:31

Querying records, of course. I'm kind of just looking at Eliza's question,

37:36

answering these

37:37

and identifying the metadata out of the case. What's so exciting is it's really

37:42

an intelligence

37:43

layer that sits on top of your CRM. So all of your data is pulled out. And then

37:48

the sentiment and

37:50

the products and the custom fields that you have in your CRM are all brought

37:56

into SupportLogic and

37:57

used for analytics and for looking at trends in your data and everything like

38:02

that. So it's kind of a,

38:04

to use the cliche marketing term, real supercharge for what

38:08

you have now

38:09

in your CRM. And what's great about it too is that it's working now for real

38:13

customers.

38:13

This is all stuff that works today. It's real world results that you can see on

38:18

our website.

38:18

So I hope that answers your question. And it kind of gives me a way to look at

38:22

this slide

38:23

and what we're presenting now with this new knowledge co-pilot. You can see

38:27

here how it shows up for

38:29

agents. It shows up for support operations. And then also in this new space for

38:34

us, which is

38:35

customer self service with these portals that Sariel showed us. Just going to

38:41

check the chat

38:41

real quick in the Q&A before we move on too quickly. So folks, getting into one

38:46

of our final slides

38:47

here, you can see here that the knowledge co-pilot is really an enterprise

38:52

grade solution built for

38:53

both support teams and customers. And you can see here why. Right? This

38:59

is a Precision

38:59

RAG engine that is built for the support domain, for complex B2B support, and

39:06

implemented successfully.

39:07

If you go out online, you're going to see that there are some Precision RAG

39:11

do-it-yourself

39:11

options out there that won't get you the same results that this does. It also

39:17

helps you leverage

39:18

the messy data that you have. So you don't have to come in with a completely

39:22

formed knowledge base

39:23

of articles that are being maintained by a team. You can leverage these

39:27

different inputs,

39:28

which is exciting. It also gives you very robust answers. So it's beyond just a

39:34

list of

39:35

search results that you have to kind of look through and determine whether or

39:40

not there's

39:41

anything right in there. One of my favorite parts about this is that it will

39:44

tell you if there is

39:45

no answer, which I think goes a long way towards establishing trust with the

39:50

people who are using

39:51

these systems. I'm sure we've all encountered search systems over the last 20

39:56

years that we

39:56

don't tend to use a whole lot because they don't actually give you good results

40:00

and so you can't

40:01

rely on them. Of course, Google will always give you a great result, but a lot

40:05

of homegrown

40:06

systems, a lot of internal systems aren't built with that same kind of

40:11

resiliency, always bringing

40:13

together a great answer. And then on top of all that is back to that tech

40:17

consolidation piece,

40:18

where this is really removing the strain on your IT resources. This is a

40:24

business partnership. This

40:25

is a fully managed solution that's part of the SupportLogic platform. Sariel,

40:29

do

40:30

you have anything to add on these? Sure. On that last point, I think one of

40:37

the real

40:38

points we've been hearing a lot is consolidation across the AI market in

40:47

general, and in support specifically,

40:50

companies really are saying, if I'm purchasing an AI-based solution, I want it to

40:58

provide,

40:58

and this goes to Eliza's question as well, I want it to provide as much value as

41:01

possible,

41:02

why do I have to deal with a whole bunch of them? And that's really what

41:05

SupportLogic

41:06

is doing: enabling the whole thing under one roof. And yeah, going back

41:14

to my point earlier,

41:16

a lot of companies today are thinking of it as build-or-buy on the whole AI, Gen

41:23

AI

41:23

idea, or trying to build possibly their own internal solutions, assuming

41:32

that they know their

41:33

knowledge best, which is true on the one hand. But then on the other hand,

41:40

really developing

41:42

a robust solution that can really provide value is another matter. I hope one of the things that

41:49

came across

41:50

in this webinar is that it's not as simple as you would think, especially not

41:53

with what we're doing, given

41:54

the level of complexity of many of these companies and their data. So it could

42:01

take a whole lot of

42:02

time and then still not provide the results. This really requires knowing what

42:06

you're doing on a

42:08

very deep level, especially on the search engine side, but not only in order to

42:13

build this out correctly.

42:15

Yeah, awesome take. Folks, thanks for joining us. We've just got one more slide.

42:22

In October,

42:24

we're going to be holding our first SX Live in-person conference event. It's

42:30

very exciting.

42:31

We've got speakers from around the industry coming in to talk about AI, to talk

42:36

about everything

42:36

going on in support. And there's some real benefits to coming into San Jose and

42:41

attending this conference.

42:42

You're going to hear how folks are using real world solutions to improve the

42:46

customer experience.

42:48

You're going to be able to connect with your peers, industry leaders, folks

42:51

that are really

42:51

passionate about support. You're going to get the chance to get inspired by

42:55

these new ideas,

42:56

and you're going to grow from this kind of immersive experience. Day one is a

43:01

training day,

43:01

October 7th, and day two, October 8th is a full day of a single track of

43:08

presentations and

43:09

fireside chats. And so I invite you to jump on SXLive.com, reach out to your

43:14

customer success or your

43:16

account executive and join us in San Jose. Sariel, Anna, thanks so much for

43:21

joining me today.

43:22

Folks, I think we've answered all of your questions. We'll have a recording of

43:27

this up

43:28

today, and we'll email it out to all of you. I can see here in the chat. It

43:32

looks like it's just

43:33

kudos all around. So thanks everyone. Have a great week.