Joe Andrews & Karan Sood 54 min

Ep 8: Real-world Generative AI for Complex Support


According to BCG, one-third of executives have already increased AI investments in response to the emergence of GenAI. Even with that investment, user skill gaps and organizational data complexity slow GenAI adoption and value realization. To be efficient, today's support teams need generative AI solutions built to help them solve complex problems. They also need enterprise-grade solutions that plug into their existing tech stacks, and those are available today. This webinar addresses how complex enterprises are approaching GenAI to quickly realize value. Joe Andrews and Karan Sood discuss the top challenges and questions on the minds of organizations considering AI:
- What is the maturity state of LLMs today?
- How should you choose between public and private LLMs?
- What GenAI use cases does SupportLogic enable today?
- Why deploy today when the technology is continuously improving?



0:00

Welcome. Good morning. Good afternoon. Good evening.

0:02

Wherever you are joining us from today, my name is Joe Andrews,

0:06

Chief Marketing Officer at SupportLogic.

0:09

And I'm joined by my esteemed colleague,

0:12

Karan Sood, Chief Product Officer. Welcome, Karan.

0:15

Thanks, Joe. Thanks for inviting me.

0:17

So glad to have you here. We are talking today about a really

0:22

important topic, generative AI solutions for complex support.

0:29

And we are talking about how companies are getting real value

0:33

from that today. And I'm really excited about this conversation.

0:37

You and I have spent a lot of time talking about building,

0:42

delivering solutions. And so we have some great insights to share.

0:47

I think collectively, you and I have spoken with, you know,

0:50

hundreds and hundreds of customers, companies in the market,

0:54

people who are facing some of the challenges in looking at generative

0:58

AI and the broader set of AI solutions today to help them out.

1:03

So before we get started, I just want to go over a couple of

1:07

administrative things. If this is your first time joining

1:10

a SupportLogic webinar event using the Goldcast platform,

1:16

you'll notice the controls on the right. We have a chat dialogue

1:20

going. We have Q&A. So anytime during the webinar,

1:26

please enter your questions. We will get to them if it makes

1:29

sense at the time or at the end. And then we have some docs

1:34

available, some resources in the docs section that you can

1:38

check out. So we will come back to that and we will do a

1:41

couple of polls as well during the event to fit the conversation.

1:44

So why don't we kick off? Starting with a little fun word

1:48

cloud. There are a lot of terms out there being thrown about.

1:53

These are some of the most popular at the moment. And two of

1:58

them that really pop into focus are Gen AI and LLM. And we're

2:03

going to spend a lot of time focused on those things today.

2:07

Want to start with, I think everyone here is familiar with

2:11

the Gartner hype cycle. We are at the peak right now for

2:18

generative AI on the hype cycle. Just as a background,

2:23

Gartner plots new technologies across the cycle and there's

2:27

sort of a hype that happens where you see the peak and then you

2:31

enter after that this trough of disillusionment when companies

2:35

sort of have skepticism and are facing the realities of these

2:39

new technologies before you enter this period of normalcy,

2:42

normalcy they call the plateau. And it's really been a year,

2:47

maybe a year and a quarter since ChatGPT 3.5 came out. And

2:52

just a year ago, we were having conversations and people, it

2:56

was brand new. People were sort of thinking about it. Not really

3:00

sure what to make of it. We were all sort of testing it in our

3:03

personal lives. But the question of how does this apply to

3:07

business and how does this apply to enterprise business? And in

3:11

our case, the world of post sales customer experience, those

3:16

questions all have been surfaced in the last year. And we're

3:20

making some great headway. We're going to dive into that today.

3:22

But it is hype. It currently is in a hype cycle.

3:27

And so I think we have all seen hype cycles for the different

3:31

technology projects in the past years. And there's always,

3:35

you know, a day of reckoning, so to say, right? However, I feel

3:38

the technology provides such a step function improvement in

3:42

making the enterprise workflows so much better. And that's, you

3:46

know, a contributing factor to the growing popularity and also

3:50

the inflated expectations, I would say.

3:51

Yeah, well said. And to complement that hype stage that

3:59

we're in, I think it's fair to say that the data shows clearly

4:04

the opportunity and impact will be incredibly large. Just a

4:08

couple of stats here. You know, this first one from BCG, if you

4:13

think about all of the hours per year, worldwide, 14

4:18

billion that customers spend contacting service. I tried to

4:22

imagine what that's like. It's like taking every person in the

4:25

United States and having them spend a full work week entirely

4:30

on the phone with service or contacting them.

4:34

Tremendous amount of surface area, so to speak, to attack with

4:39

solutions that help that problem set. You see the massive

4:43

trillions contribution to the global economy. There's always a

4:48

conversation around or concern raised around, Hey, will jobs

4:53

go away? We know with any disruptive technology, jobs do go

4:58

away. But many more jobs in this case are believed will be

5:02

created as a result. So it's about shifting the skill sets and

5:06

we'll come back to that skill set. And then there's a strong

5:09

correlation among CX leaders, you know, many of you on the

5:13

session today are in this space, who believe that there's a

5:19

correlation between AI solutions and success in post sales

5:23

customer experience.

5:24

I agree with everything that you said, Joe. And I think, you

5:28

know, I know AI replacing jobs is a sensitive

5:33

topic. But a statement, you know, from a Harvard professor that

5:37

stuck with me is AI won't replace humans, but humans with

5:42

AI will replace humans without AI. That's such a profound thought

5:47

actually. And I think that's going to be a theme of, you know,

5:50

what we're going to be talking about today, the impact that

5:54

this technology can actually make in the enterprise context in the

5:58

enterprise workflows is going to be, I think, completely

6:02

transformational.

6:02

Totally agree.

6:05

One more thing to share is that we've seen in just the last six

6:12

months, that the perception of Gen AI has dramatically shifted

6:17

among the C suite. And we're seeing now what we're calling this

6:22

tailwind of demand, where companies are, you know, looking

6:28

to deploy AI and generative AI specifically in the enterprise

6:33

and looking for where are the best opportunities to do that. And

6:37

just fall of last year, there was a big concern from the C suite

6:44

about adoption because they didn't fully understand it, they

6:48

didn't understand the implications. And they were not

6:50

supporting its rollout. And today they are the key supporters,

6:54

right? Two in three are saying it's going to be the most

6:57

disruptive technology in the next five years. And fully a third

7:01

are saying they're already increasing their investments

7:04

due to the emergence of Gen AI. So that's a big shift. Now, it's still

7:09

perception, right? Reality has a long way to go. I know,

7:14

you know, I come from the cloud infrastructure space previously,

7:17

we talked about cloud, you know, 20 years ago or 15 years ago,

7:20

and it's taken a long time to get to cloud. So this will be a

7:25

long journey. I think, you know, someone famously said that

7:31

we in business, we tend to overestimate the short term

7:36

benefits or changes of a new technology and we underestimate

7:40

the long term. So I think that's definitely clear here as well.

7:44

So, Karan, you know, first sort of basic setup question is,

7:51

I'm sure a lot of people here are wondering this or

7:55

maybe have different points of view, but how should we think

7:58

about generative AI compared to large language models?

8:02

LLMs. Love it. Love that we are starting with the very basics

8:06

still. And I'll keep it simple to begin with because we're going

8:10

to go into the details, you know, as we progress in the

8:13

webinar. So generative AI is, for me, a subset of the broader

8:20

AI technology that excels in content generation abilities.

8:26

And that content could be text, it could be images, videos,

8:30

code, music. LLMs are a further subset of generative AI that

8:39

essentially use deep learning algorithms and are tailored to

8:43

perform language related tasks such as text generation, language

8:49

translation, natural language understanding and text

8:53

analysis. So I would say, yeah, generative AI being a subset

8:57

of the broader AI and LLMs being a further subset of Gen AI,

9:02

tailored for performing language related tasks.

9:05

Great. I think that's a perfect setup. And we're going to be

9:11

talking about both interchangeably, but it's helpful to

9:14

understand how they fit together as a starting point. So let's

9:18

go deeper. Another question that people have

9:23

is about the maturity of LLMs. And this is a little bit

9:27

of an eye chart, but I thought this was useful context to show

9:32

just how quickly that curve has risen to the upper right in the

9:37

last few years of computational evolution around the LLMs.

9:43

And so curious to get your thoughts here, Karan.

9:46

Yeah, I mean, there's no doubt Joe that the technology has

9:50

evolved very rapidly. I would say in the last six to 12 months,

9:54

every new week, a new foundational model is being

9:57

announced. You know, the joke was every time OpenAI has a demo

10:02

day, they kill a few hundred startups or reset a few thousand

10:07

roadmaps at least, right? Also another measure of maturity,

10:11

at least from my perspective is GPT 3.5. That was announced, I

10:17

think sometime in November of 2022. It had an SAT score of 1260.

10:23

GPT-4, which was announced just a few months later, had a

10:28

score of 1410. That's a 150 point increment in a matter of

10:32

months. And of course, GPT-4 had so many more capabilities;

10:37

for example, it became multimodal. So I would say on the

10:40

technology side, it actually has been evolving very, very fast.

10:43

The second element of this is actually also, as you pointed

10:46

out earlier on the adoption side, now almost all companies that

10:50

we speak to are, I would say, exploring usage of large language

10:57

models in some ways. 2023 in some ways was a year of pilot

11:01

projects for a lot of companies. And 2024 is the year when a lot

11:06

of the same companies are now rolling out their first

11:09

projects in a production cycle kind of a setting, right? So I

11:14

would say definitely the technology is maturing, as well as

11:18

the adoption of the technologies is maturing as well. And on the

11:22

adoption side, I would say in my opinion, one of the reasons

11:29

why that has happened is because of the growing awareness and

11:32

the understanding of the different levels of sophistication

11:36

when it comes to implementing a large language model. You can

11:40

start with the crawl phase of just creating a wrapper around

11:43

the foundational LLM to let's say the next step being, you know,

11:47

influencing the output of the model using prompt engineering or

11:52

techniques like RAG to the advanced stage, for example,

11:57

being completely building, fine tuning and running your own

12:02

model, right? So I think a lot of these different elements from

12:08

an education and awareness perspective have also eased the

12:13

adoption because people have understood that we can crawl,

12:16

walk and run and don't go to the final stage at the very

12:20

beginning.

12:21

That's a profound point I think you just made, which is there

12:25

are really two vectors to this. It's not just the

12:27

computational power in the advancements of the technology,

12:30

technological capabilities themselves, but it's also addressing

12:34

the ease of use, right? And you can't have one without the

12:39

other to drive success.

12:41

Exactly.

12:43

Another question that often comes up is public versus private

12:49

LLMs, right? We're all familiar personally with, you know,

12:54

any of the public LLMs where we're, you know, performing content

12:59

generation tasks and asks and it's pulling from the large public

13:03

corpus. The enterprise side has, you know, some concerns, which

13:09

we'll get to a little later. But let's talk about how we should

13:12

think about them differently. And also how to think about that

13:17

crawl, walk, run that you just mentioned.

13:18

Exactly. So, as I was saying, I think there are, you know,

13:24

different levels of sophistication when it comes to how you

13:27

could implement large language models. Now, there are, you know,

13:31

maybe three to five different ways, but let's say to keep it

13:34

simple, the beginner mode, for me, is basically just a wrapper

13:39

around a foundational LLM. Now, that may, in certain

13:43

enterprise use cases be enough. But at least from what we found

13:49

out, talking to a lot of our customers, just a basic

13:52

foundational model that is just trained on any public data,

13:57

is never going to be sufficient. So the next level up from

13:59

there, let's call it, you know, intermediate mode is when you

14:05

provide certain instructions and context through something that

14:10

we call prompt engineering. So you have better control over

14:14

the model output. And this could be in terms of the formatting,

14:18

it could be in terms of the style. And when we talk about a

14:21

couple of use cases in our presentation, for example case

14:23

summarization, we will go deeper on

14:27

this. But let's say you have the foundational model, the next

14:30

step up from there is, you know, you use prompt engineering to

14:33

influence the output of the model. The next level up from

14:37

there could be there are techniques that you can use. For

14:40

example, one of the most common ones is retrieval augmented

14:44

generation (RAG), where you could add your knowledge and data

14:49

context to the model. So you can use your own proprietary

14:52

data and ground the model so that the output is restricted to the

14:56

context that you've provided to the model. Right? That's the next

15:00

level of sophistication. And then I would say the level up from

15:02

that, let's call it the advanced mode for me is when you

15:06

essentially build, fine tune, run, and probably even orchestrate

15:11

a series of your own custom private LLMs. So these are

15:16

for me at a very high level, the different levels of

15:19

sophistication. And for the different use cases, one or the

15:23

other may have to be chosen. And of course, there are advantages

15:26

and disadvantages with each of the approaches.
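To make that crawl, walk, run ladder concrete, here is a minimal sketch of the first three levels, assuming an OpenAI-style Python client; the model name and prompt wording are illustrative placeholders, not SupportLogic's implementation. The fourth level, fine tuning and running your own private model, doesn't reduce to a few lines and is omitted.

```python
# Minimal sketch of the first three maturity levels described above.
# Assumes the OpenAI Python SDK (pip install openai); model names and
# prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def level_1_wrapper(question: str) -> str:
    """Beginner mode: a thin wrapper around a foundational LLM."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def level_2_prompt_engineered(case_text: str) -> str:
    """Intermediate mode: prompt engineering controls format and style."""
    system = (
        "You summarize support cases. Respond with exactly three bullets: "
        "problem, current status, next steps. Be factual and concise."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": case_text},
        ],
    )
    return resp.choices[0].message.content

def level_3_rag(question: str, retrieved_chunks: list[str]) -> str:
    """RAG: restrict the answer to retrieved proprietary context."""
    context = "\n\n".join(retrieved_chunks)  # e.g. from your vector search
    prompt = (
        f"Answer using ONLY the context below. If the answer is not in "
        f"the context, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```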

15:28

Thank you, Karan. So can we walk through the trade offs a little

15:33

bit on each side? I know there are bullets here kind of highlighting

15:37

it. But how should I think about it? Maybe if you start with

15:39

phase one? Yeah, yeah. I would say, you know,

15:46

there are a few different vectors. There is, for

15:49

example, customization and flexibility, there is

15:51

scalability and performance. Also in certain regulated

15:55

industries, you know, privacy and security is more important than

16:00

certain other industries. And then of course, on the other end

16:03

of the spectrum, there is, you know, there is always cost to

16:05

development, there is the cost of integration. So from my

16:09

perspective, I would say,

16:13

the factors that kind of favor a private LLM,

16:19

for example, are when you need extreme customization and

16:24

flexibility. And this is because public LLMs kind of come with

16:28

predefined architecture and parameters, limiting the

16:31

customization options. And in contrast, the private LLMs offer

16:37

the organizations the flexibility to tailor the models

16:40

according to their specific needs, incorporating even domain

16:43

specific knowledge and fine tuning the parameters, specifically

16:48

for a particular use case. So I would say for the use cases

16:52

and organizations where customization and flexibility is

16:55

the most important thing, you know, maybe phase three or phase

16:59

four is the way to go. Same with scalability and performance, I

17:03

would say private LLMs, you know, can be optimized for scalability

17:09

that can be tailored to the specific workload requirements of

17:12

the use case. So again, if scalability and performance for a

17:17

particular use case or for a particular organization is the

17:20

key, maybe phase three and four are kind of more important. On

17:25

the other end, you know, with how fast the technology is evolving,

17:29

you know, every new week, a new foundational model being

17:34

announced, developing and maintaining private LLMs can be

17:39

actually very resource intensive, requiring significant

17:42

investment in research in infrastructure and talent. So I

17:47

would say it's not one size fits all; you do not always need

17:51

phase three and phase four for all use cases, maybe in certain

17:55

cases, just a foundational model, a wrapper around it, the prompt

17:59

engineering, which is phase one and phase two, probably is going

18:03

to be good enough. So I think in the end, it all depends on,

18:06

you know, for the different considerations that you may have

18:10

depending on the use case, and the use case could also be

18:14

specific to each customer, you may have to choose whether you

18:18

use, you know, a public SLM or public LLM with, I would say a

18:25

little bit of prompt engineering or most likely more advanced,

18:29

which is then going towards the direction of fine tuning and

18:32

private LLMs.

18:33

Thank you for unpacking that, Karan. So I would just want to

18:37

say for everyone who's joining us, there's complexity and we love

18:43

having these conversations with, you know, folks like yourselves

18:47

who are joining about considering different paths. And as you've

18:51

heard, it's very nuanced, depending on your own situation.

18:54

So we would love to have follow up conversations with you about

18:57

where you are and what are you trying to achieve and figure out

19:00

the best path to do that. It's a great segue into

19:05

SupportLogic's approach and how we differentiate with our own

19:08

generative AI solutions. So, you know, there are kind of four key

19:12

areas around this: infrastructure and security on sort of the

19:17

outer perimeter and then domain deep support and post sales

19:21

CX domain expertise and then our ability to fine tune the

19:25

models. But, you know, let's dive into that a little bit, Karan.

19:28

Absolutely. I think first and foremost, since there is, as we

19:32

discussed, there is no, like, one large language model that

19:37

will solve for all enterprise use cases. So what we have done

19:41

at SupportLogic is we've invested massively in building, I would

19:46

say a robust LLM infrastructure that combines public LLMs,

19:53

private small language models and private large language

19:57

models. And we can seamlessly switch between several public

20:03

LLMs and soon to be announced private LLMs as well for the

20:06

different use cases. So I would say first, you know, the way

20:10

we differentiate is because of the infrastructure that we've

20:13

created around the technology piece itself. Secondly, I would

20:18

say we have tons of domain expertise within SupportLogic,

20:22

both on the technology side, as well as on the support domain

20:27

side. And this is extremely important, in my opinion. As we

20:31

said, as new foundational models and domain specific models

20:34

are announced every week, we have a dedicated team of ML

20:38

engineers that are continuously evaluating them. And if someone

20:42

wants to build a private LLM from scratch, you will need a

20:47

dedicated team of ML engineers to research them, to test them,

20:50

to build them, to train them, to deploy them. And that requires a

20:54

lot of deep expertise and ML resources. And on the domain

21:02

expertise side as well, you know, we take the burden off you,

21:06

for example, we were talking about prompt engineering, it's

21:11

extremely important to pass the right context and instructions

21:16

to a foundational LLM model to get the desired output. The best

21:21

example that I have for this one is, you know, one of the use

21:24

cases that we do is case summarization. Now, there is

21:27

never going to be one case summarization, which is going to

21:30

be, you know, best for every user persona. Think about if it

21:35

is a case summarization for a handoff from one agent to the

21:39

other, the focus has to be on the most recent events on the case,

21:45

what are the immediate next steps? Whereas if the user persona

21:49

is a support manager, then the case summary has to focus more

21:53

on what has happened on the case from the time it actually was

21:57

logged, who are the people that were involved. And for a

22:01

support manager, recommendations on what the swarming team should

22:05

look like. Whereas if the case summarization is for an

22:10

executive, maybe there, you know, the focus is a little bit

22:13

more on what's the overall sentiment on the case and at an

22:18

account level. Now, this requires different kind of prompt

22:23

engineering that could be needed for the different user

22:25

personas. And that's something that SupportLogic has been

22:28

doing for many, many years. All of us are, you know, domain

22:31

experts in the support domain. So I would say the second level

22:35

of differentiation is because of the domain expertise that we

22:39

bring to the table. Third, I would say, as you go up the level

22:43

of sophistication in terms of the implementation, we are also

22:47

able to use techniques like retrieval augmented generation

22:50

(RAG) to ground models in customers' proprietary data. And we can

22:55

also actually fine tune the deeper layers of the models by

23:01

further training them on smaller and specific data sets that we

23:05

can get from customers. So I would say leveling it up, even

23:08

the fine tuning is something that we can help do for our

23:11

customers. And of course, last but not least, we take security

23:16

very, very seriously. For all public LLMs, we use enterprise

23:21

level safeguards to protect the data from being stored in

23:25

subprocessors, something that we've heard from our customers

23:27

and prospects. And on top of that, we've also built an

23:31

additional layer of redaction, which kind of removes all the

23:34

personal information from the data that is being stored. So I

23:38

would say across these four different vectors, there are

23:41

certain things that we do that kind of take the burden off our

23:45

customers, and they can rely on our expertise. And the fact

23:50

that we have dedicated engineers who do this on a full time, full

23:55

day basis.
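As a hedged illustration of the persona-specific prompt engineering Karan describes, here is what routing the same case to different summary styles might look like; the prompt wording and the `llm` helper are hypothetical, not SupportLogic's production prompts.

```python
# Sketch of persona-specific prompt engineering for case summarization.
# Prompt text is a made-up example of the idea discussed above.
PERSONA_PROMPTS = {
    "agent_handoff": (
        "Summarize this support case for an agent taking over the case. "
        "Focus on the most recent events and the immediate next steps."
    ),
    "support_manager": (
        "Summarize this support case for a support manager. Cover the "
        "full history since the case was logged, who has been involved, "
        "and recommend who should join a swarming team."
    ),
    "executive": (
        "Summarize this support case for an executive. Emphasize overall "
        "customer sentiment and the impact at the account level."
    ),
}

def summarize_case(case_text: str, persona: str, llm) -> str:
    """Same case, different summary depending on who is reading it.
    `llm` is any callable taking (system_prompt, user_text) -> str."""
    return llm(PERSONA_PROMPTS[persona], case_text)
```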

23:55

I would just add to that, if you look at our customers, and we'll

24:00

show some later on, many of them are in the complex

24:04

infrastructure and or security space, or in the data and

24:11

application space where all of these requirements are

24:14

critical. And it's because of the domain experience and our

24:19

ability to do something for them that is not within their core

24:23

competency that they've decided to partner with us. So we're

24:27

very proud of that. And, you know, any concerns that you may

24:31

have around any of these vectors, we're happy to dive into at a

24:35

deeper level. So let's pivot a little bit and talk about the

24:39

use cases. As I said, we have a number of

24:45

customers that are doing a whole bunch of amazing things. We

24:50

also talk with, you know, hundreds and hundreds of companies

24:53

out there in the market. And this is what we hear they are

24:57

prioritizing. And we would also love to hear, you know, get your

25:02

opinion, we have a poll that's opened up. So we have kind of

25:05

three basic areas, the agent productivity side of it, the

25:10

core support operations efficiency, and then quality

25:14

monitoring and coaching and we'll dive into a little bit of

25:17

detail of those. But if you go to the poll, tell us what is top

25:21

of mind for your priorities. We'll give that a few minutes. While

25:28

people are doing that, we can, we can dive in a little bit. So

25:32

Karan, if you would, let's talk a little bit first about

25:35

agent productivity. I think this is the one that, you know,

25:39

companies we speak with first talk about, because that idea of

25:45

using generative AI to create a response, to summarize the case,

25:51

to, you know, deliver guidance on next best actions, etc. Let's

25:56

talk a little bit about that and how the gen AI technologies are

25:59

able to help with that. Absolutely. And if I may take even a

26:04

step back, I would say, you know, we at SupportLogic have

26:07

been doing AI from the time we started, especially predictive

26:11

AI. We actually even started using technologies like

26:16

BERT, I would say way back when it was launched in 2018, 2019. And

26:21

this was, you know, one of the first LLMs, way before, you

26:27

know, the hype that we've seen in the last six or 12 months, and

26:30

at least at support logic, we fundamentally believe that

26:33

combining predictive AI with generative AI is the real game

26:39

changer. So what you see on the slide from Joe is, of course,

26:44

we will kind of dig deeper into the Gen AI use

26:47

cases. But all the other things that we do with predictive AI

26:50

combined with Gen AI, for us, is the real game changer.

26:55

Thank you for starting there. By the way, just to guide us. So we

26:59

just closed the poll. About 50% have said their top area of

27:05

priority is core support operations efficiency, 40%

27:10

agent productivity, 10% quality monitoring and coaching. So we

27:14

love that and will cater our content accordingly. Yeah, yeah. And I

27:19

would say, you know, when it comes to the gen AI use cases, I

27:22

personally like to use some kind of taxonomic categories to kind

27:28

of group even the different use cases, right? A few examples of

27:32

these categories could be, you know, Gen AI can help you

27:35

summarize content. Gen AI can help you transform content.

27:40

Gen AI can help you create new content, or it can help you

27:45

retrieve content. So I think these are the broader categories.

27:48

And within each of these categories, we have different use

27:53

cases across the breadth of our product portfolio, as Joe and I

27:57

were talking about. Now, for example, under the content

28:01

summarization, we have use cases like case summarization and

28:06

account summarization. And again, as I was mentioning, this

28:10

could even be broken down into case summarization based on a

28:14

specific user persona: case summarization for an agent, kind

28:19

of going through the handover from one agent to the other,

28:22

being one of the use cases versus case summarization for a

28:25

support manager versus case summarization for an executive.

28:29

On the content generation side, we have use cases like something

28:35

that we plan to build in the second half of this year, which is

28:39

knowledge article generation using Gen AI. For content

28:44

transformation, we have use cases like translation assist,

28:48

translating from one language to the other. Most of our customers

28:52

are global customers that deal with cases coming to them in

28:55

different languages. And even before we can do the sentiment

29:00

detection on the case, the case may have to be translated for

29:04

the agent for the support organization, let's say into

29:07

English. So I would say on the content transformation side

29:12

translation, it's just as one of the use cases that we support

29:15

today. The other one is response assist, where let's say after

29:21

the case has been summarized, also helping an agent or maybe

29:26

even a support manager compose a response that could be sent out

29:31

either internally or could be sent out directly to the

29:33

customer as well. And then on the content retrieval side, we have

29:36

use cases like troubleshoot assist, and something that we're

29:40

currently working on is natural language powered analytics.

29:44

Where for example, you can query our platform based on natural

29:48

language instead of prebuilt charts coming from us to you. You

29:55

could on the fly, in real time, create your own charts using

29:59

natural language. So these are, I would say some of the examples.
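For the natural language powered analytics idea, one common pattern is to have an LLM translate the question into a query over a known schema and then execute it. This is a sketch of the flow only: the `cases` table, prompt, and `ask_llm` helper are hypothetical, not the product's actual design.

```python
# Illustrative natural-language-to-analytics pattern: LLM writes SQL
# over a known schema, we run it, and the rows can be charted on the fly.
import sqlite3

SCHEMA = "cases(id, account, opened_at, sentiment_score, escalated)"

def nl_to_sql(question: str, ask_llm) -> str:
    prompt = (
        f"Given the table {SCHEMA}, write one SQLite SELECT statement "
        f"that answers: {question}. Return only the SQL."
    )
    return ask_llm(prompt)

def run_nl_query(question: str, ask_llm, db_path: str = "support.db"):
    sql = nl_to_sql(question, ask_llm)
    # In production, validate or allow-list the generated SQL
    # before executing it.
    with sqlite3.connect(db_path) as conn:
        return conn.execute(sql).fetchall()

# e.g. run_nl_query("How many escalated cases did each account
# have last month?", ask_llm)
```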

30:02

I'm just going to expand it to some of the new

30:05

capabilities coming, as you're referencing them. Yeah, absolutely.

30:10

Anything to talk about on the core support operations efficiency

30:16

side, you know, just we tend to think of that as sort of more of

30:20

the summarization retrieval, helping to guide workflow and

30:27

providing, you know, the analytics that help the entire

30:30

operation get more efficient. Exactly. And I think one of the

30:34

other use cases that we are currently working on in our

30:39

current releases, besides the case summarization, is

30:46

also summarizing based on all the post sales interactions that we

30:50

get into our platform, summarizing at an account or a

30:54

customer level. And not just summarizing, actually, it could

30:58

also be providing insights about, you know, like for a case, there

31:03

is a likelihood to escalate. There could also be a likelihood

31:07

to churn on the account side. So some of the things that we are

31:10

also planning to build in the second half of the year are kind

31:14

of adding to our case centric view to

31:20

also become more customer and account centric, providing

31:24

account churn insights or account summarization and stuff like

31:28

that as well.

31:29

Fantastic. And for the people who joined us who are interested

31:34

in quality monitoring and coaching, just a brief mention of

31:37

that, I think there are some powerful use cases here being

31:40

able to automatically QA all customer interactions right

31:44

against a rubric and against compliance standards, you

31:49

know, in some industries, that's very important. Being able to

31:52

sort of surface back coaching insights and analytics that help

31:57

not only your frontline agents, but the entire org get better.

32:01

We've seen some powerful results here. So if that's

32:04

something you're interested in, or if you have questions about

32:07

that, you know, feel free to add that into the Q&A.

32:10

And I would say just to add to that one, I would say a lot of

32:13

our customers are looking to move from, you know, a

32:18

very manual QA process to an automated process. And I would

32:22

say through Gen AI, you could completely reimagine, you know,

32:27

for example, the automatic QA process as well, where the score

32:33

cards and the automatic grading could happen based on

32:37

or leveraging the Gen AI technology. So I would say immense

32:42

opportunities to kind of completely transform the auto QA

32:46

process using Gen AI as a technology.
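To picture what LLM-assisted auto QA could look like, here is a small sketch that grades an interaction against a rubric and returns a structured scorecard; the rubric items and the `ask_llm` helper are invented for illustration, not an actual scorecard.

```python
# Hedged sketch of LLM-assisted auto-QA: grade a customer interaction
# against a rubric and get a structured scorecard back.
import json

RUBRIC = {
    "greeting": "Did the agent greet the customer professionally?",
    "empathy": "Did the agent acknowledge the customer's frustration?",
    "resolution": "Did the agent resolve or correctly escalate the issue?",
    "compliance": "Did the agent follow required disclosure language?",
}

def grade_interaction(transcript: str, ask_llm) -> dict:
    """Ask the LLM for a 1-5 score plus evidence for each rubric item.
    A production system would also validate the returned JSON."""
    prompt = (
        "Grade the support interaction below against each rubric item. "
        'Respond as JSON: {item: {"score": 1-5, "evidence": "..."}}.\n\n'
        f"Rubric: {json.dumps(RUBRIC)}\n\nTranscript:\n{transcript}"
    )
    return json.loads(ask_llm(prompt))
```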

32:48

Fantastic. All right, let's move ahead. One of the other

32:54

topics that folks are interested in covering are the

32:58

challenges, right? What is getting in the way of enterprise

33:03

AI and LLM adoption? And let's go a little deeper on these and

33:07

talk about what they are and how to address them.

33:09

Yeah. So Joe, I think even before the five categories that I

33:14

mentioned here, I liked the blog that you had there, you

33:17

know, you were talking about, you know, it's for any

33:21

technology, not just Gen AI, it's important to define the

33:24

problem first. Gen AI could be a solution to some problems. But

33:30

it should not be the end goal. In fact, misusing Gen AI

33:34

diminishes the value of AI in an organization in my opinion. So

33:38

so defining the problem first is the most important thing. But

33:43

specifically speaking about... Sorry, Karan, just to quickly

33:47

mention: the poll is open. We want to hear your opinion from the

33:51

audience. What are the top challenges you're facing today?

33:54

Pick up to two.

33:55

All right, so specifically speaking about the Gen AI challenges

34:01

and the pitfalls from a technology perspective. Personally, I

34:05

would say hallucination is one of the

34:10

biggest ones that I've heard from our customers. One of the

34:14

challenges with large language models is that they have a

34:18

tendency to confidently bullshit or hallucinate stuff, right? I

34:23

mean, hallucinations are essentially output that while

34:26

appearing plausible, are not based on factual information. And

34:31

this becomes particularly significant in an enterprise

34:36

context where accuracy and fact based decision making is

34:40

paramount. And the principal challenge arises from the fact

34:44

that these models, especially foundational public large

34:50

language models, they are trained on massive data sets that

34:55

has an advantage because it kind of gives them the breadth. But

34:58

they're often derived from the internet or extensive, you know,

35:03

text corpora, and if the source data encapsulates certain

35:07

bias, the model will learn and possibly even magnify that

35:13

bias, right? So I would say hallucination for me is one of

35:17

the biggest challenges when adopting a technology like this.

35:22

Of course, there are ways around it. For example, as we were

35:25

talking about, using technologies like RAG and fine tuning the large

35:31

language model, exactly to counter problems like

35:36

hallucination. So I would say hallucination for me is one.
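One concrete way the grounding techniques just mentioned counter hallucination is to constrain the model to supplied context and give it an explicit way to say it doesn't know. A minimal sketch, with a hypothetical `ask_llm` helper:

```python
# A common anti-hallucination guard: the model may answer only from
# the supplied context, and must admit when the context is not enough.
def grounded_answer(question: str, context: str, ask_llm) -> str:
    prompt = (
        "Use ONLY the context below. If it does not contain the answer, "
        "reply with exactly: INSUFFICIENT CONTEXT.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    answer = ask_llm(prompt)
    if answer.strip() == "INSUFFICIENT CONTEXT":
        # Fall back to a human or a broader retrieval pass instead of
        # letting the model guess.
        return "No grounded answer found; routing to an agent."
    return answer
```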

35:39

Data complexity is the other one. Key to successful fine

35:44

tuning lies in the preparation and the pre processing of the

35:51

domain specific data sets, which is critical to ensure, you

35:55

know, compatibility with the model and with the task at hand,

35:58

whatever is the use case. And this is where as we're saying at

36:02

support logic, you know, we have the expertise to be able to

36:08

deal with domain specific data to be able to train language

36:13

models based on your proprietary data so that we can,

36:17

you know, make the complexity of data a little bit easier for

36:21

our customers. Third, I would say, still in several

36:26

organization, not just in the regulated industries, but even

36:29

generally across many customers, data privacy and security

36:33

concerns are still valid. Foundational models are

36:37

trained on massive amounts of public data. And, you know, for

36:41

a large language model to become helpful in the enterprise

36:44

context, they need to be retrained on proprietary data

36:48

that may contain some sensitive personal data, financial

36:52

information, or any confidential stuff, right? So to

36:56

mitigate these concerns, enterprises must ensure that their

37:00

data is adequately secured and that redaction, if needed, is

37:07

happening on all of that data. So I would say data privacy and

37:11

security is definitely one of the other things. Skill gap, we

37:15

talked about it in one of the earlier slides as well on the

37:17

technology side, new models are being announced every week.

37:21

It's extremely difficult to keep pace with it unless you have a

37:25

dedicated team of machine learning engineers deciding

37:29

which models are best suited for which use cases. Yeah. And on

37:33

top of that, you know, you also have domain

37:36

specific models. And that expertise basically is also not

37:44

very easy to come by. And that skill gap, I would say, is one of the

37:47

other challenges that I've seen where just using a foundational

37:51

large language model, but not having the expertise to fine tune,

37:54

you know, not having the domain expertise does not give you the

37:58

output that you were expecting for a particular use case. Now,

38:02

this is not the problem with the technology. This is that you

38:06

don't have the right skill sets that you need to apply on top of

38:09

the technology to get the best out of it. Right. So I would say

38:12

those are some of the challenges that I can think of. It would be

38:16

also interesting to see, you know, on the voting, which ones

38:20

scored. Yeah. So those just came in. Thanks, Karan. So the top

38:23

vote getters were data privacy and security concerns, probably

38:30

about 25%, in the top two. And then integration with existing

38:38

systems and workflows. I think this is

38:40

Yeah, I can talk about this one as well a little bit. I

38:44

would say, you know, like any other technology, Gen AI is a

38:48

solution and not the end goal. So before diving into the AI

38:52

implementation, it's critical to first define the

38:56

business problem that you're trying to solve for. And when we

39:00

think of Gen AI use cases at SupportLogic, we take a very

39:03

user persona and a jobs to be done based approach. And we think

39:08

about the end to end workflow that way, you know, whether it's

39:11

predictive AI, whether it's generative AI, it's not really an

39:15

afterthought. But it is embedded into the core workflow itself.

39:19

For example, we dogfood our Gen AI with our proprietary

39:23

predictive AI to get more out of generative AI. So I would

39:27

say integration of Gen AI into the end to end workflow is extremely

39:33

important as well.

39:34

Gotcha. Just want to state there's some questions coming in.

39:39

Avi, you've got a couple in there, we're going to come back to

39:41

those really soon. We have about five more minutes of content.

39:44

And then we want to get to Q&A. So for others, if you have a

39:47

question, please enter it in the Q&A. And let's move along. This

39:53

is an important topic, though, is addressing these challenges

39:55

and just a couple of stats from a survey from IBM in terms of,

40:01

you know, the propensity for the skills gap and the data

40:05

complexity to be issues. It's pretty significant among

40:08

the concerns of IT leaders who are faced with deploying these

40:12

solutions. Another question that comes up is, why deploy now?

40:19

Right. We looked at this exponential curve of technology

40:23

and computational improvements. You know, why not wait? Why should

40:27

we deploy now?

40:28

Yeah, that's one of my favorite ones. You know, I view

40:34

Gen AI as a fundamental shift in how we will deliver

40:39

interfaces. This technology provides such a major step

40:44

improvement in interface, making the enterprise workflows so much

40:49

better compared to what it was without the technology, right?

40:55

So it's on the same scale, if not more, I would say, as the worldwide

41:00

web three decades back or mobile phones two decades back,

41:04

just like it was never good advice to not go online and

41:09

leverage the internet. In fact, the companies that aggressively

41:13

embrace that change were the ones that actually succeeded. The

41:17

same way I would say it does not help to sit on the sidelines

41:21

with this technology and wait to see how it plays out,

41:26

right? Earlier adopters will likely be in a leadership

41:30

position as well. And the earlier you get involved, the more

41:33

you will learn and the more it will put you in a better

41:36

position, as the technology evolves and develops.

41:39

I would add one thing, Karan, so with previous technologies,

41:48

there was, you know, a major forklift sort of transformation

41:54

upgrade. There's a lot of the, you know, infrastructure and

41:58

foundation that needs to be built and enterprises could not

42:01

undertake these changes lightly, whether it was moving from

42:04

on-prem deployments of applications to the cloud, etc.

42:08

In this case, think of it as a sidecar. There are, you know,

42:14

abilities to take advantage and deploy some of these use cases

42:18

very quickly, like in, you know, a month and a half, 45 days to

42:23

deploy some of these use cases. And it's not negatively impacting

42:28

your core operations. And, you know, partners like

42:32

SupportLogic would be doing the heavy lifting. And then you're able

42:36

to get immediate results that help you inform the rest of your

42:40

roadmap and your plans. So it's sort of a parallel effort

42:44

versus I need to make one big decision. And then it's going to

42:48

be a major move.

42:50

Wonderfully said, 100% agree.

42:53

Okay. So I want to come back to sort of the benefits. We've

43:00

talked a lot about the use cases, the challenges, because we

43:04

want to be very transparent and face those upfront. But it's

43:09

important to consider that whether you're looking at predictive

43:13

AI solutions, which have been around from SupportLogic for a

43:16

lot longer or newer generative AI solutions, these benefits are

43:21

very compelling across the organization, right, whether it's

43:24

the core support team operations management, you know,

43:30

frontline agents, executives on the left, customer success, even

43:35

the GTM functions, you know, top center or product and

43:39

engineering, IT BizOps, CXOs, finance. There are benefits that

43:44

accrue company wide. And one of the questions we always get is,

43:49

you know, which one should I prioritize? If I'm trying to make

43:53

the business case, if I'm a support executive or customer

43:57

success executive, how do I compete with corporate IT looking

44:01

at say 14 different AI projects? And so this is one of the best

44:07

ones that delivers value for post sales CX. And we see that. I'm just

44:13

going to show a few stats here across these vectors from

44:18

customer experience, right, in terms of reducing escalations,

44:22

improving CSAT, NPS, customer retention, removing friction,

44:27

improving experience, and then operational efficiency, which is

44:31

all about, you know, better, faster, cheaper. And you see the

44:36

results here. One of the customers that we're

44:39

proud of and did a case study webinar a couple weeks ago with

44:44

TSIA is Informatica. They were able to prioritize implementing

44:50

SupportLogic. And this actually augmented a build initiative

44:54

that they had going on for many years. And they brought us in

44:58

because of the deep domain expertise. And they were able to

45:01

very quickly reduce the overall number of cases, which,

45:06

you know, removed the drag on their internal resources that they

45:10

were able to redeploy to other initiatives in their portfolio

45:14

of projects across post sales CX. So all of these customer case

45:20

studies are on our website. We would be happy to walk through

45:23

them with you, but they're very compelling. So any final

45:29

points, Karan? I want to show a couple of resources, and then

45:32

we'll address some questions in the time that we have left.

45:35

No, nothing from my side, Joe, we can go to the resources and

45:39

then the questions.

45:40

Fantastic. So one of the things that we're proud of

45:45

at SupportLogic is we have been building over the last couple

45:49

of years a community we call SX Live, for support experience.

45:53

And we do a series of events and we have a lot of virtual

45:56

content, like the webinar that we're on now, where you can go

46:01

and learn and you can contribute. So we encourage you to check

46:05

that out as a resource for yourself. We are also doing a

46:09

series of city tours across the US this year. We just completed

46:15

two in Redmond together with Microsoft and in Austin, Texas,

46:20

and you see the upcoming calendar for that. So great way for

46:23

you to network. These are not support logic infomercials.

46:28

These are conversations and panels with, you know, thought

46:33

leaders and practitioners who are sharing their best practices

46:36

and it's a great way for you to learn. And then the other

46:39

thing is if you want to see more in terms of what we've done

46:44

and see the actual product, we have several ways for you to do

46:48

that. If you go to our website, you can request a one on one

46:54

demo. You can join a weekly group demo that we have every Friday.

46:57

And then there's also an upcoming webinar where we're going to go

47:01

deeper on what constitutes enterprise grade AI for the

47:06

category because as we saw, there is concern around privacy

47:10

and security, rightfully so. And we're going to dive into that

47:15

in addition to the other requirements that constitute

47:18

enterprise grade. So with that, I want to open it up for some

47:23

questions. Thanks for those; please enter yours if you haven't had a

47:28

chance. So we're going to get to Avi who had a couple questions

47:32

first. When do you think RAG will be available in your Gen AI models?

47:38

So Avi, we are working on this and we are hoping for this to be

47:43

available in the next three months. We're already doing some, I

47:48

would say internal pilots at this stage, the use case that we

47:51

are starting with when it comes to RAG is the troubleshoot

47:57

assist use case where we're kind of trying to use this

48:01

technology to help agents first troubleshoot the

48:06

problem and find the right solutions to the problems that have been

48:10

described in the case. So that's the use case that we are

48:13

starting with. And we're hoping for this to be available at least

48:18

for our beta customers in the next three months.

48:23

And there's a follow up as well from Holly on the RAG topic of

48:28

do you have any tips or resources about how to get started?

48:33

We are in the process of, since this is still not a generally

48:36

available feature in our platform, we don't have extensive

48:40

documentation as yet, but this is again, something that we are

48:42

working on. And we would absolutely love to share that with

48:46

you as soon as that's going to be available. But yes, this is

48:52

exactly in the works.

48:54

Great. And then another question from Avi, how are you

49:00

measuring toxicity and bias in the gen AI sentiment analysis

49:05

use cases?

49:06

So just to maybe understand, is it more in the more

49:14

classical?

49:15

Sentiment analysis itself, because that probably is not even

49:23

related to Gen AI.

49:25

Obviously, if you could clarify in the comments or the Q&A, the

49:31

you know, because sentiment analysis is typically the

49:33

predictive AI technologies where we're mining all the signals,

49:37

the unstructured data, and then organizing it and bringing the

49:41

analytics front and center. So there is, you know, bias and

49:46

toxicity, I would say is another word for a signal, which is,

49:50

you know, a customer is extremely frustrated or upset. And then

49:56

bias is about sort of normalizing the data and bringing forth,

50:00

you know, predictions and assessments, level setting, a

50:05

customer sentiment based on their normal baseline, not the

50:09

entire market.

50:10

Did I get that right, Karan?

50:12

Karan?

50:12

No, that's absolutely accurate. And maybe, Avi, what we

50:17

can also do is if you could please drop in your contact details,

50:21

we can also schedule a separate session with you where we can

50:25

talk about, you know, how, for example, we measure accuracy

50:29

and precision when it comes to detecting signals and the

50:33

scores that we calculate. It's a deeper conversation and we

50:37

would love to have that with you.

50:40

Great. And a follow up. Well, Amer asks, when will this

50:47

be available? The last I heard was Q1. I think maybe the

50:50

reference is to RAG, what you were talking about earlier.

50:53

Is that right?

50:55

Correct. So the RAG, I would say, as we said, it's in the works,

50:59

internal pilots in the next three months, but some of the other

51:03

use cases that do not use rag, for example, case summarization,

51:07

that we have already opened up for our beta customers. And we

51:11

have a couple of customers already trying that out and working

51:14

with us to kind of fine tune it for their use cases. So it's not that

51:19

everything is unavailable. It's only the RAG specific use

51:23

cases that are planned in the next three months. A lot of

51:28

the other things, translation assist, response assist, case

51:32

summarization, these features are already available in the

51:35

platform.

51:36

Fantastic. All right. Last call.

51:42

Any other questions?

51:44

I think Laurie. Yeah. How do you approach or pitch a prospect

51:56

that is not on board with AI? Why don't you take that one,

52:00

Joe, and I'll pitch in?

52:01

Yeah. So I think, Laurie, we basically follow the framework

52:08

of showing benefits and then understanding someone's

52:16

objectives, right, the priorities, whether that's, you know, in

52:21

the operational side, getting more efficient with existing

52:24

resources, improving the experience directly for end

52:28

customers or for their internal agents. And as we showed kind of

52:33

the three column set of use cases, all of those basically

52:38

support those higher level business benefits in some form

52:41

or another. By understanding where the prospect or customer's

52:47

main objectives are, we can then quickly map them to the

52:53

solutions and then talk about here's where the AI technology

52:58

can benefit them specifically. So that's a that's definitely

53:02

a starting point. Obviously, a lot more nuance as you're

53:06

addressing, you know, potentially concerns, the challenges

53:09

that we talked about. But at the end of the day, any technology

53:14

is about solving problems and use cases, you know, better,

53:18

faster, cheaper. And then it's inspiring confidence with those

53:23

who are going to make that change.

53:28

Pretty well said, Joe. All right. So final one, and then we

53:33

have to wrap up. From Amer: OK, clarification about the question,

53:37

it's about PII data, will SupportLogic be able to detect PII

53:42

and redact the PII? Yes. So with our Q1 release, we have

53:48

already. So if you're an existing customer, you may have

53:51

already seen some release notes, we are introducing

53:53

redaction already in Q1. This is also going to be an ongoing

53:57

investment through the rest of the year. So we will add more

54:02

capabilities. But basic redaction is already available with our

54:04

Q1 release.
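As a rough illustration of what a redaction layer does before data is stored, here is a minimal regex-based sketch; a production redactor would rely on NER models and far more patterns than this.

```python
# Minimal sketch of a PII redaction pass run before storage.
# Real redaction systems use NER models plus many more patterns.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    # SSN goes before PHONE so the broader phone pattern
    # doesn't swallow SSNs first.
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# redact("Reach me at jane@example.com or 555-123-4567")
# -> "Reach me at [EMAIL] or [PHONE]"
```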

54:05

All right, we've reached the end. Fantastic session. Appreciate

54:14

everyone's input on this. And Karan, you're a fantastic business

54:19

partner, and I really enjoyed the conversation. You bring so much

54:22

knowledge to the table, and I really appreciate your time here

54:25

today. Likewise, Joe, it was my pleasure to be here with you.

54:29

Thanks, everyone. Please stay in touch. If you have follow up

54:34

questions, feel free to reach out and we look forward to

54:36

continuing the conversation.

54:38

All right, everyone.