Episode 5: From LLMs to agentic AI — what leaders need to know
Paula Rivera:
Welcome to The AI Factor, the podcast that explores how AI is transforming the way we do business and interact with the world around us. I’m your host, Paula Rivera. And today’s CX Explained episode is all about AI for business leaders, cutting through the hype to get to what really matters. From large language models to agentic AI, we’re breaking down the concepts and showing how they apply to real-world business scenarios. Joining me is Matthew Caraway, Senior Product Manager of AI applications here at IntelePeer. He’s been on the front lines helping businesses make sense of this ever-evolving space. Matthew, welcome.
Matthew Caraway:
Thank you. Excited to be here.
Paula Rivera:
Wonderful. Well, before we dive into the heart of the conversation, I’d like to get your insights on a little bit of current affairs, shall we call it. As we all know, last week, after much anticipation, Tesla’s robotaxis finally hit the road in Austin. While only available to a select few, the program features its Model Y vehicles, and it includes a real-life safety operator in each car. Initial reactions were positive; however, there definitely were some speed bumps along the way. Videos have been popping up showing the vehicles entering the wrong lanes, which I’ve done on my own without the help of a robot, dropping passengers off in the middle of the road, speeding, and even driving over curbs. Gosh, I think I’ve done all of those things. I don’t know, I think it’s robot one and human one. Matthew, which would you prefer: a self-driving taxi that is an actual car, or a self-driving flying taxi?
Matthew Caraway:
Oh, that’s easy for me, it’s a flying taxi. And I’ll tell you what, I want the ability to see the world from a perspective that we don’t get every day. If we just have a self-driving car, that’s honestly not any different than just being a passenger going down the road. But if you’re able to be in a self-driving flying taxi, man, you just get to see the tops of buildings, you get to see nature from perspectives you normally don’t. It’s literally a bird’s-eye view of the world.
Paula Rivera:
Nice. Well, okay, out of curiosity, have you actually tried out one of these self-driving taxis?
Matthew Caraway:
I have not. I live in the Rocky Mountain West in Boise, and it’s not a strong footprint for Tesla.
Paula Rivera:
No, I don’t believe it is. So let’s dive on into the main part of the conversation. Let’s talk about what AI is and isn’t. It’s been around for a while now, but before we can apply it to any business, it helps to understand what we’re really talking about. So Matthew, let’s start simple. How would you explain AI to a business leader who’s not technical?
Matthew Caraway:
I like to approach it through a couple of different lenses. First, I like to talk about why it’s powerful for the workforce, and it’s because it allows folks to unlock access to knowledge they couldn’t have before. For instance, because of generative AI, I’ve been able to teach myself concepts, since I have instant access to an expert tutor in any domain. Whether I want to learn what large language models are, how to work on a car, or about biology, those used to be things where you’d have to go to the library or to Wikipedia and read the material the way somebody else wrote it. But because of GenAI, you can say, “Hey, here’s what I do know and here’s what I don’t know,” and GenAI is going to meet you where you are.
Then the other case I would make to business users is that it allows you to automate tasks and surface insights that, frankly, were just really difficult for humans to produce at scale before. AI can search through vast collections of dirty data and teach you things about your customers that your employees would’ve spent months discovering. And honestly, because AI never sleeps, it’s going to be able to work while we are sleeping. You’ll come in the next morning and it’ll tell you about really important trends surfaced from your customers’ data so that you can take action today.
Paula Rivera:
I love it, and I think you hit upon a key item there, which is scale. AI just enables things to get done and processes to be improved at a rate that humans on their own aren’t able to achieve.
Matthew Caraway:
That’s right, yeah.
Paula Rivera:
Yep. So what are some of the biggest misconceptions you encounter when people talk about AI?
Matthew Caraway:
I like to think a lot about what’s the right tool for the job. Oftentimes folks see something that’s mysterious to them and they say, “Oh, a large language model did that.” And it’s like, “Well, slow down. That was an image classification task,” or, “That was a weather prediction task.” When we watch the weather at night, AI is helping the weatherman give you a weather report. There are tons of advanced models at play there, but they’re not large language models. So I think it’s really important in all forms of business for us to map the problem that we’re up against and understand the right solution. Because if somebody incorrectly applies generative AI to a problem, they’re probably going to have poor outcomes, they’re going to be really frustrated, and then they’re going to become a detractor of AI. Whereas if they’d used the right tool for the job, they would’ve achieved their outcome.
Paula Rivera:
I really appreciate that. I actually have found that on my own. I mean, my use of AI is certainly not to the scale of yours by any stretch of the imagination. But I am a writer and I do find that some AI models work better for my purposes than others, and so I’ve kind of adopted my two or three favorite ones that I like to use for my writing. I still use other models, but it’s just not for writing purposes.
Matthew Caraway:
Yeah, exactly. Yeah.
Paula Rivera:
Excellent. So can you break down the difference between traditional AI, machine learning and generative AI? I think for a lot of people they hear AI, but whatever comes before it gets confusing.
Matthew Caraway:
That’s right. And frankly, this kind of relates to my prior answer about understanding your solutions. And I’ll tell you that traditional AI, and I often say this with quotes, we call it “boring ML” because it’s been around for decades, and it’s really associated with various architectures of simple machine learning, or even more complicated ones that we describe as deep learning. These solutions are really targeted in the problems they’re solving. They’re provided some form of input and they’re expected to make a prediction. Maybe it’s attributes of a city, and the model needs to predict how livable that city is, or maybe attributes about the atmosphere, and it’s supposed to make predictions about weather patterns. We build these models to predict the price of stocks, to detect fraudulent actors on social media. And I’ll tell you what, we even use them to power our feed of recommended songs and podcasts on Spotify.
Whereas on the other hand, generative AI, it has a much more open domain of use, and it’s not intended to predict outcomes or affinities. We use it, literally it’s in the name, it’s to generate new content based on a prompt or some other input such as an image. Think about our ability to go to ChatGPT, to Anthropic Claude, plug in an image, and it’s going to be able to do something because of that image. And frankly, a lot of this beyond the inputs, it also comes down to differences in scale and compute. A machine learning engineer, they could go build and train a price prediction ML model in an afternoon.
Literally, you and I could share our screens and I could show you how to build a model that’ll predict the price of stocks tomorrow. On the other hand, training a large language model requires massive computers the size of warehouses running weeks and months of continuous processing, and that time is going to cost millions of dollars in GPU spend. So it’s not only how these models are used, but also what it costs to train them and what expertise is required to get there.
Paula Rivera:
I love it. And two very quick things, you need to speak with Mack Greene. He was on two or three episodes ago, and he was saying he’s looking for that AI model that can predict stocks, so maybe you could help him create one.
Matthew Caraway:
Yeah. Yeah, that’s what I do in the night, yeah. Fascinating problem.
Paula Rivera:
Excellent. And then you did make mention of Spotify. I love Spotify. I love the service. For the listeners out there who subscribe to Wall Street Journal, just do a search for Spotify in their video section. They have a really great eight minute video that breaks down how Spotify actually works behind the scenes. So it really helps explain what is the AI doing to help serve up recommendations, find the top listened songs and all of that wonderful Spotify jazz, shall we say. So I appreciate you using that as an example.
You were just talking about large language models. Let’s talk a little more in depth about them. Models like GPT-4 or Claude are powering everything from chatbots to content creation, but there’s a lot to understand about how they work and why they’re relevant to the customer experience. So in plain English, which you did just give us, could you give us a little bit more background on what exactly a large language model is?
Matthew Caraway:
Yeah, I actually think it’s really important when we dive into new domains to simplify things and use that as foundations for our building blocks. So simply put, an LLM, its job literally is just to attempt to predict the most likely next word in a string of words. It uses complex statistical models to determine, “Hey, what words are possible given the preceding words in the sentence fragment? And then of these possible words, which is the one that I should pick?” And there’s a bunch of controls that we can build in our LLM systems that say, “Pick the most likely word, be the most predictable and the most deterministic.” Or we can say, “Randomly pick the next word so that you sound more creative and you’re more artistic.”
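Matthew’s two decoding styles, “pick the most likely word” versus “randomly pick so it sounds more creative,” are what practitioners call greedy decoding and temperature sampling. Here’s a minimal sketch over a toy vocabulary; the words and probabilities are made up purely for illustration, not from any real model.

```python
import random

def sample_next_word(word_probs, temperature=1.0, rng=None):
    """Pick the next word from a {word: probability} map.

    A temperature near 0 is greedy decoding: always take the most
    likely word, fully predictable. Higher temperatures flatten the
    distribution so less likely words get chosen more often, which
    reads as more "creative".
    """
    if temperature < 1e-6:
        # Effectively greedy: deterministic, most likely word wins.
        return max(word_probs, key=word_probs.get)
    rng = rng or random.Random()
    # Re-weight each probability: p ** (1/T) sharpens (T < 1) or
    # flattens (T > 1) the distribution before sampling.
    weights = {w: p ** (1.0 / temperature) for w, p in word_probs.items()}
    total = sum(weights.values())
    r = rng.random() * total
    cumulative = 0.0
    for word, weight in weights.items():
        cumulative += weight
        if r <= cumulative:
            return word
    return word  # floating-point fallback

# Toy distribution for "The weather today is ___"
probs = {"sunny": 0.6, "cloudy": 0.3, "purple": 0.1}
print(sample_next_word(probs, temperature=0.0))  # always "sunny"
```

At temperature 0 the output never varies; crank the temperature up and “purple” starts showing up, which is exactly the predictable-versus-artistic dial Matthew describes.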
And another key point about large language models: this knowledge for next-word prediction is really created in two main ways. Number one, it’s the different architectures and technical advancements of the models themselves. If we go back about eight years, the famous paper from Google came out, “Attention Is All You Need,” and that was a huge advancement over the way prior language models were built, because it gave models a different way to keep context, or knowledge about, “What is this topic that I’m talking about?” There were sequence models that could maybe finish a sentence okay, but once you got anywhere beyond a paragraph, they kind of forgot what they were talking about and went astray.
Then the second thing is to talk about the vast training data sets that these models have been provided. We can think about the major vendors, Google, OpenAI, Anthropic, xAI, Meta, these vendors, they’ve sourced a ton of public knowledge, whether it’s from books, Wikipedia, social media, anywhere else that public knowledge exists. And then, of course, they’ve got access to a ton of internal knowledge too, whether it’s generated from their own company or maybe partnerships that they’ve established. I like to think about Meta a lot. We’ve got all the different social media platforms, we’ve got messaging apps that are provided through Meta, and so that’s how they’ve been able to train their various models, such as Llama, by looking at the way that people interact on Facebook and Instagram or WhatsApp messaging applications.
Paula Rivera:
Just to make sure I understand it, and I kind of hark back to ChatGPT, generative AI. So generative AI effectively is a large language model.
Matthew Caraway:
It is, yeah. Well, it’s actually sort of the other way around. Generative AI is the superset, and large language models are one example of generative AI. But if we look at models that create songs, models that create images, models that create videos, those are other forms of generative AI.
Paula Rivera:
Ah. Very interesting. Okay. So how are LLMs being used in real customer experience applications today? And I think you do a lot of this in your day job.
Matthew Caraway:
Yeah, we do. This is the backbone of the innovation that IntelePeer brings to the market. And I’ll tell you that where we sit, we believe we’re at the very beginning of this revolution. We’re often going to insert AI and agentic AI into the common scenarios that are low value for humans today. If you call your help desk and you need a password reset, or you want to know the hours of operation for a business, those are things that a company really needs to offload to an intelligent automated system so that it can prioritize its labor force for complex tasks that drive business value. We see the power of customer experience automated by AI really growing legs, and simple things we’re tapping into today include calling and ordering a pizza, or calling your doctor’s office at any time of day because you’re not feeling well and you want to schedule an appointment.
And it doesn’t matter if the office is open or closed because these AI agents are going to answer at whatever time of day it is, wherever you’re located, and they’ll help you get to your doctor’s office tomorrow or next week, as soon as you would need to. That’s really how we’re seeing it today. And I’ll tell you that there’s this huge opportunity in front of us to really start to connect what are the other unmet needs of consumers when they’re interacting with businesses? How can we better provide telehealth? How can we better provide feedback to those businesses about the experiences of your actual customers? And that, again, is somewhere else that IntelePeer is advancing. So beyond automating the customer interactions, we’re also helping businesses understand the unmet needs of their own customers.
Paula Rivera:
Very nice. And that’s so very clear, but it makes me wonder, while we’re helping businesses uncover and use these LLMs, on the flip side, there has to be some limitations, so what are the limitations that businesses should be aware of?
Matthew Caraway:
Yeah, there are probably two key things that I always try to highlight. Number one, and this one’s a lot like how humans are, frankly, there are biases in the way that we as humans make our decisions, and large language models also have their own forms of bias. Bias comes in many different ways. Sometimes we as humans think about bias in terms of how we’ve seen ourselves incorrectly interact with disadvantaged groups. But bias might even come down to this: if two people were grading the same student’s report, would they both give that student an 83%, or would one person always give a B while the other is more willing to give A’s to students that earned an A and C’s to students that earned a C? And in fact, we’ve discovered that with some of the different models we have benchmarked.
We have literally compared multiple large language models for a specific domain, because we really wanted to understand, as we surface these models in production, how they’re going to solve needs for our customers. And the second thing I would talk about for limitations goes back to how models are built. I said that they go find all the public knowledge they have access to and train on that. Well, the public knowledge that existed in 2024 is of course different from the knowledge that exists here today in 2025. And in addition to public knowledge, each company has its own proprietary knowledge: your own business rules, your own information created through your intellectual property. So it’s really important for AI builders, as they connect with various businesses to automate their workflows and the customer experience, to make sure we provide the right context to the model at the right time, surfacing the required knowledge. And this is something we spend a ton of time studying and perfecting here at IntelePeer as we build our solutions.
Paula Rivera:
Wow. Well, bias, I have to say is rather bananas. But sounds like with the knowledge and the context, you can help break down some of that bias.
Matthew Caraway:
Yeah. And it’s actually really funny because we use AI to grade AI, and so it becomes its own meta problem of like, “Well, which layer of the system is exhibiting the bias? And are you aware of the bias or is it masked?” So yeah, it’s its own fun problem.
Paula Rivera:
AI, what’d you just say? We’re using AI to train?
Matthew Caraway:
We’re using AI to judge AI.
Paula Rivera:
So does it always give itself an A?
Matthew Caraway:
No, which is great. Yeah.
Paula Rivera:
Boy, I would love to judge myself.
Matthew Caraway:
Which is funny you say that, because that’s one of the questions asked in the literature. This technique, we refer to it as LLM-as-a-judge. And so we ask ourselves, if the thing being judged is an OpenAI model or an Anthropic model, should the judge be the same model or should it be a different model? It’s really interesting, because I argue that the literature actually conflicts with itself. The current position that I have is that you probably want to benchmark, or judge, with a model that is not the one that generated the content. Because there will be bias: the judge will see the things that it typically writes and says and think, “Oh yeah, that’s pretty good. I like that.”
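The cross-model judging Matthew describes can be sketched in a few lines. This is a toy illustration of the idea, not IntelePeer’s implementation; the judge functions and model names below are hypothetical stand-ins for real model API calls.

```python
def judge_across_models(candidate, generator, judges):
    """Score a generated answer using only judge models that did NOT
    produce it, to sidestep self-preference bias (a model tends to
    rate its own style of writing more favorably)."""
    scores = [
        judge_fn(candidate)
        for name, judge_fn in judges.items()
        if name != generator  # exclude the model that wrote the answer
    ]
    return sum(scores) / len(scores)

# Hypothetical judges standing in for real model calls; each returns
# a 1-5 rubric score for the candidate answer.
judges = {
    "model_a": lambda text: 5,  # model_a loves its own output
    "model_b": lambda text: 3,
    "model_c": lambda text: 4,
}

# "model_a" generated the candidate, so only b and c get a vote.
print(judge_across_models("Some generated answer.", "model_a", judges))  # → 3.5
```

Excluding the generator from the judging pool is the whole trick: model_a’s inflated self-score never enters the average.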
Paula Rivera:
Right, right. Very much like what a human would do.
Matthew Caraway:
That’s right. I’m always willing to give myself an A.
Paula Rivera:
So let’s focus on agentic. I know we talked, I asked you about generative. How is it different from LLMs and maybe how is agentic AI different from generative AI?
Matthew Caraway:
So again, let’s just go back to the basics. First of all, typical agentic systems today rely on large language models, and LLMs by themselves just produce text output, but they’re really useful in doing that. They can teach you things about various concepts. For me, they’ve taught me a ton about large language models and a ton about data science. They can help you write blog posts. Large language models can even serve as a virtual brainstorming partner. Think about some relationship conflict you may have had at work or in your personal life. I’ve literally sat down with a language model, described my position and my perception of the other person’s position, and asked it to help me empathize with the other person so that I can better understand how to be a good friend, a good teammate, a good husband.
So LLMs by themselves are incredibly useful, but they can’t act, they can’t perceive, and they can’t make decisions. That’s where agentic comes in. If we just boil it down, the World Economic Forum has put out some really good definitions of what agentic is, and they didn’t even base it on large language models. What they described is AI agents that have the ability to sense their environment and take action on that environment, while constrained to a specific domain and responsibility of expertise. Sounds like kind of a fancy term.
So if I give you an example, you could have an AI agent that has the responsibility of providing weekly food for your family. It could have the ability to sense its environment by reviewing a refrigerator. You could put a webcam in your refrigerator and it could see what food you have in there. It could track how long your milk has been in there and let you know that it’s expired, and it would find the things that you typically buy but aren’t in there. And it would know the kind of food that your family normally likes to eat. And based on the presence of what is in your refrigerator and is not, it could take action on that environment by generating a meal plan for you, and then it would call Walmart or it would call Amazon and it would purchase that food that you would need and it would be delivered to your house that afternoon just in time for dinner. So that’s agentic AI at work. It allows a system to take action on an exterior environment.
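The refrigerator example follows the classic sense-decide-act loop that defines an agent. Here’s a toy sketch of that loop with made-up data; `order_fn` is a hypothetical stand-in for whatever purchasing integration (Walmart, Amazon, or otherwise) a real system would call.

```python
from datetime import date

class GroceryAgent:
    """Toy agent: sense the fridge, decide what's missing or expired,
    act by placing an order through an injected purchasing function."""

    def __init__(self, usual_items, order_fn):
        self.usual_items = set(usual_items)  # what the family normally buys
        self.order_fn = order_fn             # stand-in for a vendor API

    def sense(self, fridge):
        """fridge: {item: expiry_date}. Returns what needs replacing."""
        today = date.today()
        expired = {item for item, expiry in fridge.items() if expiry < today}
        missing = self.usual_items - set(fridge)
        return expired | missing

    def act(self, fridge):
        shopping_list = sorted(self.sense(fridge))
        if shopping_list:
            self.order_fn(shopping_list)  # take action on the environment
        return shopping_list

orders = []
agent = GroceryAgent({"milk", "eggs", "chicken"}, orders.append)
fridge = {"milk": date(2000, 1, 1), "eggs": date(2999, 1, 1)}
print(agent.act(fridge))  # → ['chicken', 'milk']
```

The point of the sketch is the shape, not the groceries: sensing (reading the fridge), deciding (expired plus missing items), and acting (calling out to an external system) are what separate an agent from a plain text generator.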
Paula Rivera:
Wow. I love that. And I haven’t really bought a refrigerator lately, but don’t refrigerators have some degree of AI in them?
Matthew Caraway:
I’m kind of scared because they do, and I haven’t gone to Home Depot to figure out, “Hey, what does this computer do?” And I don’t know, maybe it’s just the tinkerer in me, but I’d rather have a little home automation project on a weekend and stick my own webcam in there, just to see what it’s all about and how to do it myself. Because then you get control over, “How do I define my grocery list? How do I decide on Walmart or Amazon or some other vendor to purchase food from?”
Paula Rivera:
Yeah, yeah. So that’s a wonderful example, and I’m now thinking about dinner. I’m like, “Oh, I got to get that chicken in the oven.” Now I need the refrigerator to prep the chicken and put it in the oven for me.
Matthew Caraway:
That’s when the robots are coming. Those are even better agents.
Paula Rivera:
Yeah, exactly. Exactly. So let’s bring this back to business, for those who aren’t in the business of manufacturing refrigerators. What are some real world examples of agentic AI in action, whether it’s for business or customer service.
Matthew Caraway:
We actually build these every single day here at IntelePeer for our customers. These are examples where, sort of as I described earlier, let’s say that I’m a healthcare patient and I need to make an appointment. Instead of calling the doctor’s office, well, first I open up Google Maps and find their number and check to see if they’re even open, then I wait until Monday morning at 8:00 AM to call and see when their next available appointment is, and I have to talk to a human, and it’s just this frustrating process when all I want is to schedule an appointment. Or maybe I want to pay my bill but I don’t have a computer in front of me, so I just want to call somebody and pay over the phone. But if a human isn’t available, then you’re stuck as the caller, as that patient. Enter IntelePeer.
So we’re helping customers build solutions so that any time of day, even if it’s 2:00 in the morning and you realize, “Hey, I have a bill that I need to pay,” or you’re feeling unwell, you can call the phone number for your doctor’s office and talk to an agent that sounds human-like. You’re no longer talking to a robot; it’s no longer rigid. They’re going to welcome you to the business and ask what sort of needs you have. And it’s not going to be like yesterday, where you’d press one for this, press two for that, and you don’t even listen to all those numbers, you just hammer zero a hundred times to finally talk to somebody who can actually help you. That’s really what we’re building for: automated solutions that people want to use, that people are excited to call at any hour of the day to schedule that doctor’s visit for their sick child, and they get off the phone and they’re not stressed out about a poor phone experience. They can instead focus on their family and whatever needs they have in their actual lives.
Paula Rivera:
Yeah, no, I appreciate that. And I have to tell you, I think what we’ve done in IntelePeer, I’m getting into a little bit promotional, but also educational, one of the things that I think humans sort of struggled with, and we’ve spent a lot of time focused on this, which is what we call in the industry latency. So I think way back when, and even when generative AI kind of first came out, latency was sort of that time in between my saying, “Oh, I want to schedule an appointment,” and then long pregnant pause while the agent, the virtual agent is thinking about it. We’ve really done a great job of getting that latency down so it is very much like a human conversation.
Matthew Caraway:
That’s exactly right. And it goes beyond just saying, “Hey, we’re going to reduce latency.” Instead it’s about, well, what is a natural conversation? Think about the way that you and I talk back and forth. There are pauses between each of us speaking, because we want to make sure the other person has finished, but we also need time to think. I know when you ask me a complex question, I can’t just immediately generate the response. So we have to make sure that when we ask a user or a caller a complicated question, we give them time to respond. And when the large language model needs to do something complicated, just like a human agent would, say, look up this caller in a medical record system to see if you’re a new patient or an existing patient, that takes a little bit of time. So we’ve spent a lot of time thinking through what that actual experience is that makes a conversation fluent and natural.
Paula Rivera:
Nice, nice. Okay, full disclosure, I was creating these questions when the heat index was over 100, and I probably had just read something and I thought to myself, “I really don’t know what this is, but I bet Matthew could tell me.” So this question might be a little bit out of left field and may not completely be on topic, but I’m going to ask you because I have you. And that’s about composable and task-specific virtual agents. Well, what is a composable virtual agent and how does this tie into this overarching concept?
Matthew Caraway:
It’s a really important concept actually, and it’s founded in principles of software engineering. We have really cool, or maybe even dry, terms for this, and one of our principles literally is DRY: D-R-Y, don’t repeat yourself. The idea is that when you build an application that does something really well, we call those pieces modules, and we want to make sure those modules are reusable. So when Engineer A spends a week or a month building a really good component that solves a problem, like accessing a database in a specific way that’s compliant and performant, then when Engineer B needs to access that database six months later, they can just leverage that same component, that same module, and they don’t have to repeat themselves. Remember DRY: don’t repeat yourself.
So we apply that same technique when we’re building agents. You identify these agents that solve the same task. While you may have medical agents that go interact with systems and solve problems, one of those problems for a medical agent may be collection of payment information or collection of address information. Yet when you’re looking at other customer service use cases like ordering over the telephone for a food service or ordering your groceries, those same needs need to be solved. You need to ask the user, “Hey, are you going to pay by credit card or would you like me to send you an SMS link?” You need the ability to say, “Where do you live? Where am I going to come pick you up from?” Or, “Where am I going to drop off your food at?”
And it’s actually interesting: through our studies and our actual build-out, collecting address information is incredibly complicated. Not only because street names come in different forms, but phonetically, we have many streets that sound the same. And if anybody has spent time in Hawaii, that’s where some of the most unusual street names in America are, because the linguistic origins of Hawaiian are so distinct from English. So while we have Main Street, USA, there typically isn’t a Main Street, Hawaii. We had to build agents that were really good at address collection for Hawaii. And that’s an example of the various ways we’re using composable, task-specific agents that then orchestrate into the overall experience we’re providing for callers.
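The DRY principle behind composable agents can be seen in miniature below: one shared address-collection component reused by two different agents, rather than each agent re-implementing it. The class and agent names here are invented for illustration, not IntelePeer’s actual modules.

```python
class AddressCollector:
    """Reusable module: any agent that needs an address delegates here,
    so address-handling logic (validation, sound-alike streets,
    Hawaiian street names) lives in exactly one place."""

    def collect(self, raw):
        # A real implementation would validate, disambiguate phonetic
        # matches, and so on. Here we just normalize whitespace and case.
        return " ".join(raw.split()).title()

class MedicalAgent:
    def __init__(self, address_collector):
        self.address_collector = address_collector  # shared, not rebuilt

    def register_patient(self, raw_address):
        return {"address": self.address_collector.collect(raw_address)}

class FoodOrderAgent:
    def __init__(self, address_collector):
        self.address_collector = address_collector  # same module: DRY

    def take_delivery_order(self, raw_address):
        return {"deliver_to": self.address_collector.collect(raw_address)}

shared = AddressCollector()
print(MedicalAgent(shared).register_patient("123  main st"))
print(FoodOrderAgent(shared).take_delivery_order("45 kalanianaole hwy"))
```

When the address logic improves, say, better handling of Hawaiian phonetics, every agent that composes in `AddressCollector` gets the improvement for free, which is the business case Matthew makes for composability.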
Paula Rivera:
So would it be fair to say, for a business leader who may not really know a lot about AI but has to fake it until they make it, shall we say, that this is probably something they want to ask about? I think if I were hiring a company to build AI for me, I would want that company to be using composable agents when appropriate, because otherwise you’re just paying somebody to reinvent the wheel.
Matthew Caraway:
And it’s going to be unreliable, they’re not going to have testing for it, it’s not going to be trustworthy. The results that you get today won’t be the same ones as tomorrow. And that’s why you actually pay enterprise level engineered solutions to solve these problems.
Paula Rivera:
Well, I’m glad the heat made me ask this question.
All righty, so with great AI comes great responsibility as we all know. I think Matthew has laid out a lot of that in great detail. Let’s talk practicalities, what to consider when implementing AI in your organization. So what are the key benefits of using AI in customer experience workflows, Matthew?
Matthew Caraway:
I actually love to approach this from the user’s perspective. When I talk to my friends, I get all excited: “Hey, we’re building AI stuff. Let me tell you what my company does.” And when I tell that story, I focus on what’s impactful for my friends. I talk to them about the last time they called a business and how frustrated they were going through that phone tree, the frustration of waiting on hold, getting to speak to an agent and hearing, “Oh, you’re in the wrong department. I’m going to have to transfer you to somebody else,” and having to wait on hold again. We hate that. We as consumers have been held hostage when calling our phone providers, our internet providers, our banks for decades. The use of generative AI and agentic systems for customer service allows businesses to create a better brand for their users, so that at any time of the day, I can call the phone number to schedule a doctor’s visit.
Or even think about in the financial world, we’re not there yet in terms of a society, but we’re trending there, to where you’d be able to call the phone number for your bank, and without talking to a human being, you’re going to be able to tell them you need a bank loan for a new Ford F-150 because you’re at the car dealership, and they’re going to know who you are. They’re going to know that you’re good for your payments. They’re going to do an ID verification to assert that this is exactly you, and then you’re going to walk back into Ford and you’re going to say, “Yeah, check’s in the mail. They got you.” And you did it in just a few moments working on the phone with an agentic system.
Paula Rivera:
So you just brought up something that I think is really interesting. Recently I was watching a news program, sort of one of these talk-style programs, and it was a lighter segment. There was an article out, I believe it was in the Wall Street Journal, I’m not quite sure, oh, it might’ve been The Atlantic, that talked about how some companies are using what they call “sludge” in their customer service environments. Basically, they’re intentionally creating a bad experience for people when they call in. The thinking might be, “Let’s create a bad experience so they go use the computer or their mobile phones and stop calling the 800 number.” But I think in this day and age, that’s completely bananas, shall we say. Have you heard of this? I’m hoping you’ll say you think it’s horrific, but perhaps not.
Matthew Caraway:
No, I do agree it’s horrific because as consumers, we have this vast choice today of different companies that we can do business with. We have choices today that we’ve never had before. And prior, there was this stickiness or lock-in that we had. You had your car insurance. And I just think about my parents like, “Oh, I worked with State Farm for years.” And for me, no, I didn’t work for some company for years. I didn’t have insurance with some company for years because I liked them and I loved them. No, I do business with a company because they treat me well. And if there are moments that I go to a business and it’s painful to give them my money to receive benefit from them, then I’m going to go somewhere else. And any company that intentionally makes their phone system painful, I’m going to look for their competitor as a consumer.
So businesses really need to look for all of the different channels and environments that their customers are going to come do business with them in. So many years ago, they tried to optimize their brick and mortar experience, and then they tried to optimize their digital footprint with e-commerce websites. And then I just see tomorrow and in the future, we’re going to be talking about the power of voice. We’re going to be talking about the power of agentic systems through different interfaces so that we can go to Amazon in different ways than we did before. We can go to our banks in different ways than we did before.
Paula Rivera:
I so appreciate your viewpoint, and when I was listening to this segment, I was like, “In this day and age, no company should be operating like that.” So thank you for providing your insights on that topic, the topic of sludge. So what challenges should leaders anticipate in rolling out their AI initiatives?
Matthew Caraway:
Yeah, I’d recommend several things. Number one, just recognize it’s hard. And depending on what you’re trying to achieve, if you’re trying to create cultural transformation internally, it’s important to create an environment of trust and safety and an environment of experimentation. We do that a lot here at IntelePeer. We encourage people to be building prototypes using AI systems. We encourage people to be using AI to solve problems for their job in unique ways or even to solve problems that maybe aren’t directly related to their job, but provide benefit to the business.
And then one of the other key challenges I would tell you is don’t go it alone. Building AI is really, really hard, and it’s going to be filled with a ton of mistakes. So if a business decides to go build their own AI workflows and their own AI automation, and they don’t have that experience, they’re going to fail many, many times before they can succeed. By partnering with people who have done this at scale repeatedly, you really get to stand on the shoulders of giants. And that’s the benefit that I, as an AI product manager, want to bring to my customers. You get to benefit from the things that we’ve learned in our labs, so that when we arrive at your door to help you solve problems, you don’t have to repeat yesterday’s mistakes. We can just accelerate value for you.
Paula Rivera:
I love it. So let’s kind of bring it on home and talk about what leaders need to walk away with. If there’s one thing you wish more executives understood about AI, what would it be?
Matthew Caraway:
Oh, man. That may be the toughest question here. I’d say it goes back to the fact that it’s so hard and that there’s no magic key, and that you should spend a lot of time thinking about what your business actually needs from AI. Go back and study your problem space. What is the actual problem that I’m having? And don’t just apply AI like it’s peanut butter. Build real intelligent, scalable solutions that have proper governance and change management associated with them. And by going through those established practices, you can apply emergent technologies to solve problems in new ways.
Paula Rivera:
Excellent. For leaders looking to dip their toes into AI, what’s the best first step that they should take?
Matthew Caraway:
Oh, yeah, continued tough questions. The first step you’ve got to take, though: is this even an AI problem? It just goes back to finding the right tool for the job. And then, assuming it is an AI problem, figure out the cost of failure. Is this something that I need to keep in my business? So in some cases, you might have a build decision. But then there are going to be many things which aren’t your company’s core mission. That’s a classic case to find vendors that have that domain of excellence so that you can benefit from what they bring to the table.
Paula Rivera:
Excellent. I like this benefiting from the knowledge of others. So for those leaders who are looking to future-proof their strategies, what can they do as this tech continues to evolve? Because it really does seem like every day there’s something new, there’s something better, there’s another wow factor being announced.
Matthew Caraway:
Lean in, be curious. Know that this is the fastest the world has ever moved in terms of innovative technologies. And be willing to anchor yourselves in moments in time when it’s acceptable. We have to be willing to accept good enough. So if you use AI to solve a problem today, and it’s working, just because a different AI solution comes out tomorrow, that doesn’t necessarily invalidate what you’ve achieved today. So don’t spend your time chasing the latest shiny object. Make sure to continue to measure against business results and demonstrate whether you’re achieving your business outcomes.
Paula Rivera:
Nice. Alrighty. Well, we’re kind of winding down here. And before we say goodbye to Matthew, we like to do a round of rapid fire questions. These are fun and insightful questions to get to know Matthew beyond his title and his day job, which he definitely brought to life for us today. So Matthew, I’m going to ask you three questions. You can give me a one word answer or you can expound upon your thought as you deem appropriate. So first question, what’s your favorite fictional AI character from TV or film?
Matthew Caraway:
I’m going to have to defer that one. I’m the weirdest guy in the company. I don’t have a television in my house. So I don’t have a favorite TV or film character.
Paula Rivera:
So I’m glad to say you’re a reader and you must have a book?
Matthew Caraway:
I read a ton of books on really geeky topics like data science. Weirdly enough, they’re not fictional topics. I’m an athlete, so when I’m not here at work building AI products, I spend a lot of time outdoors training.
Paula Rivera:
Ah, okay. So that’s question number three. And I know the answer for question number three, I think. But let’s just quickly do number two and then we can get to a fun question or a more fun question. So what’s one tech tool or a gadget that you can’t live without?
Matthew Caraway:
Yeah, so this one is awesome. I just discovered it within the last year. It’s what we call a Zwift bike, an indoor smart bicycle that allows you to connect to a virtual world and ride against people all over the world. It could be 10:00 PM tonight, and you could be racing people in South Africa where it’s 9:00 AM their time tomorrow. And I really like it because it produces this gamified nature of exercise. I’m a really competitive person, and I also love that they’ve continued to introduce AI features into Zwift. They’re not large language model type features, but more of the traditional AI and ML features, so that if there’s some goal you have in mind over the next 12 or 14 weeks, the intelligent program from Zwift can help you build workout programs to become strong in the ways that you’re trying to become strong.
Paula Rivera:
That’s great. It really is great. And I think both of these questions have teed me up for what I think I know what your answer is for the last question. But one non-tech hobby or interest that helps you recharge?
Matthew Caraway:
Yeah, my life outside of work is centered around racing dirt bikes. This is something I’ve been doing for pretty much my entire life. I’ve been racing for over 20 years and riding for close to 40 years now. I went on a honeymoon with my wife, we raced in South Africa. I’ve raced half of the states in America, and I’ve raced in multiple continents. And yeah, it’s definitely what drives me when I’m not here at work.
Paula Rivera:
Wonderful. And I believe you just took a week off because you were competing in a race, and I need to know how you did.
Matthew Caraway:
Oh, I did great. It was a world championship that came to northern Idaho a couple weeks ago and had a fantastic time at it.
Paula Rivera:
Wonderful. I’m glad you did well, and I’m glad you came back unscathed. That always makes me happy.
Matthew Caraway:
Try to.
Paula Rivera:
Well, Matthew, thank you so much. I really appreciate having you on. It’s always a pleasure talking to you. I walk away a wee bit smarter.
Matthew Caraway:
Thank you. I appreciate your time and I’m humbled to be here.
Paula Rivera:
Wonderful. We definitely will have you on again.
I want to thank Matthew once again for joining us today and helping break down the building blocks of AI for today’s business leaders. From LLMs to agentic AI, it’s clear that understanding these tools isn’t just a tech issue, it’s a business imperative. Join us on the next episode where we dive into AI implementations and break down some of the cultural changes that need to take place within your organization as you roll out AI. And of course, be sure to subscribe to The AI Factor wherever you get your podcasts. And visit intelepeer.ai to learn more about how we’re helping organizations turn AI into ROI. Until next time, stay curious, my friends.
About this episode
In this episode of AI Factor’s CX ExplAIned: From LLMs to Agentic AI – What Leaders Need to Know, we sit down with Matthew Caraway, Senior Product Manager of AI Applications, to break down the essentials of AI for business leaders. The conversation explores the evolution from Large Language Models to agentic AI, highlighting how these technologies can drive smarter, more personalized customer experiences. From debunking AI myths to discussing implementation challenges and benefits, this episode offers practical insights for executives looking to navigate the AI landscape with confidence. Whether you’re just getting started or scaling AI in your organization, this is a must-listen for anyone shaping the future of CX.