Episode 2: Beyond the bot: How conversational AI is redefining business communication
Paula Rivera:
Welcome to The AI Factor, where we decode the power of artificial intelligence for the real world of business. I’m Paula Rivera, and today we’re diving into the world of conversational AI, what it is, how it’s evolved from the clunky chatbots of the past, and what it means for the future of customer engagement.
Joining us today is Drew Popham, a seasoned solutions engineer who works directly with organizations deploying these technologies. Whether you’re new to AI or already experimenting with it, this episode will give you a fresh perspective and maybe a few laughs along the way.
Drew, welcome.
Drew Popham:
Thank you so much. I like being called seasoned versus senior as a solutions engineer. Seasoned is a nice way to put it. I appreciate that.
Paula Rivera:
Well, you’re very welcome. That was a bit of a tongue twister, I have to admit.
So in a recent experiment, an AI model trained on Shakespeare rewrote modern movies in verse. Think The Fast and the Furious as a Shakespearean drama. If you could have AI rewrite any business tool or workplace process in the style of a Shakespearean play, what would you pick and why?
Drew Popham:
That’s a good question. I’d probably go with making my… email generation. I’d like to be able to have my email responses go out and be much more eloquent and just dramatic in the style of Shakespeare, I would say. So, doth thee share your use cases?
Paula Rivera:
Oh, I love it. I love it.
Let’s dive on in. We’re going to rewind the clock a bit, because before we had AI that could schedule appointments or hold a natural conversation in Shakespearean style, we had Clippy and rules-based chatbots. Let’s discuss how we got from basic bots to intelligent agents. Can you tell us, Drew, how did chatbots first emerge in the business world?
Drew Popham:
Well, I think before we even get to chatbots, we might have to explain to younger people who and what Clippy was. We could also just let them look that up on their own. Those of us who are in the know understand how interesting Clippy was, and really just chatbots in general.
Chatbots really started out as simple, rules-based systems for question and answer, right? So they were really out there to just answer FAQs, handle customer support, and assist with technical questions, and they came with a set of preset responses. It was something where it asked if you needed help, you had to enter what you needed help with, and if one of your words or phrases matched something in its script, it fed you that information. It was pretty hit or miss on what you would get back and what kind of success you’d have, but that was really the beginning.
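To make that concrete, here’s a minimal sketch of the keyword-matching pattern Drew is describing. The FAQ entries and the matching rule are invented for illustration, not any particular product’s logic:

```python
import re

# A minimal rules-based chatbot: keyword in, canned answer out, hit or miss.
FAQ_RULES = {
    ("hours", "open", "close"): "We're open 9am to 5pm, Monday through Friday.",
    ("return", "refund"): "Returns are accepted within 30 days with a receipt.",
    ("ship", "shipping", "delivery"): "Standard shipping takes 3 to 5 business days.",
}

def rules_bot(user_text: str) -> str:
    words = set(re.findall(r"[a-z]+", user_text.lower()))
    for keywords, response in FAQ_RULES.items():
        # Fire on the first rule whose keywords overlap the input at all.
        if any(k in words for k in keywords):
            return response
    return "Sorry, I didn't understand that. Please rephrase your question."

print(rules_bot("When do you open?"))         # matches the "hours" rule
print(rules_bot("My package never arrived"))  # no keyword hit: dead end
```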
Paula Rivera:
That’s super interesting. Give us a snapshot about how chatbot technology has evolved over the years.
Drew Popham:
Sure. We just set the stage for where chatbots started, but then, as they began to grow, the drive was to add more sophistication: giving them more knowledge, as it were, and the ability to look through and understand what people are saying, really adding what we call natural language processing. If you’ve ever heard of NLP, that’s being able to understand and respond to the queries people are asking and become a more human-like experience. And we’ve come all the way up to today, with ChatGPT a few years ago and now all the models that have come out of deep learning and large language models.
Paula Rivera:
It’s interesting that you say, “Giving the model more knowledge.” On last week’s episode, I believe Mike Rapp and I were talking about garbage in, garbage out. The bots need to be trained, or the AI model needs to be trained with good knowledge, shall we say? Would you agree with that? I think most people would. They call it clean data. Perhaps you could shed a little bit of insight on that. I’m throwing you a curveball from our discussed questions.
Drew Popham:
Yeah, I think it’s accurate to say “garbage in, garbage out.” As models become more sophisticated, one of the things the industry as a whole, in my opinion, fights to do is deal with that kind of data, because once you put information in, it’s hard to extract it back out of the model. And so that matters not only for having clean, accurate information, but also for removing things like bias from a model.
And so that is important. As new models are created, teams are careful about what they put in. When large language models were first being developed, they were just shoving in as much data as they could to get their end result, so it was very much garbage in, garbage out. But as models become more specialized and fine-tuned, we’ll see cleaner models come out, hopefully with reduced bias as well.
Paula Rivera:
The terms chatbot and conversational AI are often used interchangeably, but they’re far from the same thing. Drew, clear the air about what really sets conversational AI apart from your average chatbot. What is the real difference between a chatbot and conversational AI?
Drew Popham:
I think it’s important to address one thing as we go forward, and that has to do with the term conversational AI, because conversational AI was really where the industry was at before ChatGPT came out and we went down the generative AI path. Conversational AI was Amazon Lex, Google Dialogflow, IBM Watson. Those were conversational AI tools.
Now, conversational AI has become a different term, so that when we’re talking about it, we’re talking about actual conversation, an AI that you can have a real back-and-forth with, and generative AI tools are what power those. So I just wanted to address that at the top, because it can be confusing for some people, especially in the industry as we’re talking about it.
Now, comparing a chatbot to a conversational AI powered by generative AI, it really comes down to this: chatbots were predefined rules and responses. They’re predictable tools, they don’t understand context, and they don’t have any nuance or understanding of what’s being asked of them. It’s keyword or key phrase in, result out, and results may vary. Conversational AI powered by generative AI not only has much, much larger data sets, it also generates responses based on the context and content being given to it, and it produces them dynamically. So it’s not a question-and-answer pair tied together; it’s a dynamically created response based on the context of the conversation. That’s really where we get much higher user acceptance.
As time goes on, what we really see is that they keep going a little further. People calling in were used to the chatbot way, or even the older conversational AIs I referenced earlier, where they had to say a specific word or phrase and hunt their way through to find the correct way to frame what they wanted so the AI would give it back. Chatbots were the first part of that, then they became more advanced, and then we ended up with the conversational AI models we talked about.
Now, conversational AI as it stands today isn’t like that. You can give ambiguous responses, and the AI, being able to understand context and more nuance, can come back, understand your intent, and give you the answers you need or move you through the conversation to get you what you want.
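By contrast with the rules-based sketch above, the generative approach Drew describes replaces the rule table with a model call that sees the whole conversation. Here’s a rough sketch of that loop using the openai Python client; the model name and system prompt are placeholder choices, not anyone’s production setup:

```python
from openai import OpenAI  # assumes the openai package and an API key are configured

client = OpenAI()
history = [{"role": "system",
            "content": "You are a helpful support agent for a retail store."}]

def conversational_reply(user_text: str) -> str:
    # The model sees the whole conversation, so an ambiguous follow-up
    # ("what about weekends?") resolves from context instead of keywords.
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```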
Paula Rivera:
Excellent. You had my mind going in a few different places with that. One of the things I was thinking about was the hallucination element. It does feel like since generative AI came onto the scene, which I know conversational AI uses the generative AI element, but since that came on the scene, what, two years ago or so, the hallucinations, or the responses, have been getting better and better. Is that fair to say, or is the system just getting used to me?
Drew Popham:
You know, I think people are getting used to talking in a more normal way with AIs, so they’ve become more comfortable with it. I think hallucinations from the models have improved overall, along with the quality of understanding and responses. But I also think our ability to deploy conversational AI has gotten better in how we implement and design, where we talk about things like agentic AI, where you have multiple AI agents handling a conversation.
Agentic AI can be interpreted in many different ways, but at its core, at its most basic, there’s an AI agent that handles the beginning of the conversation and monitors it throughout, bringing in other AI agents that are more specialized and focused on a particular task. That way, we’re limiting the scope of what each agent does and keeping it focused on its task, and you get a much, much lower hallucination rate on those specific tasks and a much, much higher overall success rate in understanding and completing tasks and moving on to the next piece. And we can break each process down as small as we need: gather information, understand where we’re at in the conversation, move along to the success of a particular task or intent, and then move on to the next one.
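That orchestrator-plus-specialists structure can be sketched in a few lines. Everything below, the agent prompts, the routing rule, and the call_model stub, is a hypothetical illustration of the pattern rather than any particular platform’s implementation:

```python
# One coordinating agent routes each turn to a narrow specialist,
# limiting the scope (and the hallucination surface) of every model call.
SPECIALISTS = {
    "scheduling": "You only book, move, or cancel appointments.",
    "billing":    "You only answer questions about invoices and payments.",
    "general":    "You only answer general questions from the approved FAQ.",
}

def call_model(system_prompt: str, user_text: str) -> str:
    # Placeholder for a real LLM call, scoped by the specialist's prompt.
    return f"[{system_prompt}] replying to: {user_text}"

def classify_intent(user_text: str) -> str:
    # Stand-in for the orchestrating agent; in practice this is itself
    # a small, fast model call that labels the turn with one intent.
    text = user_text.lower()
    if "appointment" in text or "schedule" in text:
        return "scheduling"
    if "bill" in text or "pay" in text:
        return "billing"
    return "general"

def handle_turn(user_text: str) -> str:
    return call_model(SPECIALISTS[classify_intent(user_text)], user_text)
```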
Paula Rivera:
Interesting. Interesting. How does conversational AI adapt across channels like voice and chat?
Drew Popham:
When we talk about different channels, we’re primarily focusing on voice and SMS, and maybe some chat as well. It all boils down to the way we’re giving information to the AI in our deployments.
For voice interactions, we’re using speech recognition and natural language processing to understand what people are saying and turn it into text. We don’t even have to translate it into English; we can give the AI the native language it’s in. But we do have to take it from speech to text, hand that to the AI, let it process it, and then move the response back into voice.
So we focus on the harder piece, the real-time interactions, especially voice. There’s quality of audio, quality of speed, quality of response, and task completion that we all have to focus on. Things like text and chat have longer spaces in between responses, so we don’t have to worry about the audio piece, and we don’t have to worry about recognizing what people are saying because they type it out. We mainly have to worry about context and the accuracy of the response coming back.
So it’s similar across all channels for the most part; the different layers for us are really speed of response and, if it’s a pure text engagement, accuracy of response, I would say.
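The voice loop Drew just walked through, speech to text, model, then text back to speech, has a simple skeleton. The three stage functions below are stubs standing in for whatever STT, LLM, and TTS services a given deployment actually uses:

```python
def speech_to_text(audio: bytes) -> str:       # stub for a real STT service
    return "transcript placeholder"

def generate_reply(transcript: str) -> str:    # stub for the LLM call
    return "reply placeholder"

def text_to_speech(text: str) -> bytes:        # stub for a real TTS service
    return b"audio placeholder"

def handle_voice_turn(caller_audio: bytes) -> bytes:
    transcript = speech_to_text(caller_audio)  # native language is fine;
                                               # no translation step required
    reply_text = generate_reply(transcript)    # the model processes the text
    return text_to_speech(reply_text)          # and the answer goes back to voice
```

A pure text channel skips the first and last stages, which is why, as Drew says, the remaining concerns there are mostly context and accuracy.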
Paula Rivera:
Okay. So, two questions. One of which: speed of response. Have AI systems improved in responding? Because I think one of the consumer hesitations with AI is when there’s this long, pregnant pause.
Drew Popham:
Sure, absolutely.
Paula Rivera:
And you’re wondering what’s going on in the background and what is this machine even doing? Has that improved over the years, and what would be an acceptable response rate?
Drew Popham:
Sure. So yes, it’s improved greatly over the last couple of years. When we’re talking about that real-time outreach and response, we’re constantly in a process of research and development into the AI models that give us the accuracy we need in responses, but also the reliability of a fast response. Some AI models give very, very good-quality understanding, knowledge, and responses, but they might take four or five seconds to come back to you, and in a voice conversation that’s wildly unacceptable.
And we’re going through the process of not only understanding what somebody has said and taking that from speech to text, but then, once we get a response back from the AI, going from text back into speech and playing that back to them. And we need to be doing all of that in around three seconds, otherwise the interaction becomes a little too long.
So there are other ways you can mitigate that. Just as in a conversation with somebody on the phone, where you give them a piece of information and they say, “Thank you. Just one second,” and you hear them typing, we can add those kinds of things. But our goal is to really not have to do that. We want a natural back and forth in a conversation. We want the AI to be responsive. And so there are a couple of layers to what we’re doing.
So the AIs we’re testing have to meet those requirements, otherwise they’re just not going to work for us. And then also, when it comes to natural language processing and understanding what people are saying, we have to have both high accuracy and a fast response, and be able to process that into a voice that sounds natural. We don’t want to fool people with the voices that are out there. It’s fine if people understand they’re talking to an AI agent, but we want the voice to seem natural and not distracting, so they can move through the conversation quickly.
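That pair of requirements, accuracy plus a fast response, translates naturally into a deadline around the model call, with a human-style filler if the deadline slips. A minimal sketch using only the Python standard library; the 2.5-second deadline and the filler phrase are illustrative values, not production settings:

```python
import concurrent.futures

_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)
FILLER = "Thank you, just one second."

def reply_within_budget(call_model, user_text: str, deadline_s: float = 2.5) -> str:
    """Run the model call in a worker thread; if it misses the deadline,
    return a filler phrase instead of dead air (the mitigation Drew mentions)."""
    future = _pool.submit(call_model, user_text)
    try:
        return future.result(timeout=deadline_s)
    except concurrent.futures.TimeoutError:
        return FILLER  # the real answer can be delivered once it finally lands
```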
Paula Rivera:
That is something I’ve always wondered about. Thank you for explaining that. The second question you sparked in me, and you might be done with being sparked by this discussion, but this is-
Drew Popham:
No, it’s good conversation.
Paula Rivera:
Yeah, it really is. You made a comment before that struck me: you said that if someone’s writing a text, let’s say, it’s, I think you said easier, but it’s more straightforward for the AI model to look at it, understand it, and respond back. Are AI models able to gauge emotion in someone’s voice? If someone’s calling in and they’re irate, can the AI model detect that, and can it actually work to defuse the situation?
Drew Popham:
So what you’re talking about is something we call sentiment, and there are other pieces to it. This is understanding what’s happening in somebody’s voice and where they’re at.
One of the ways we detect it is just the words people are saying, obviously. If we’re getting back a bunch of curse words, that gives an indication that somebody’s not so happy about something. As well, if the sentiment begins to get worse as we’re talking to somebody, we can detect that and then make intelligent decisions based on it.
And a lot of the time, our goal is always to automate as much as the person we’re talking to wants. But if somebody is becoming frustrated, our typical recommendation to our customers, whose customers we’re supporting through automation, is: “Hey, if somebody’s becoming frustrated, if we’re not understanding somebody for some reason and they’re becoming frustrated, or if they just get on the phone and they’re not happy, we’ll get them to a person.” We want people to have a good experience, but we also don’t want to be a barrier to getting them where they need to go.
So if somebody calls in and they’re just frustrated, rather than frustrate them further by making the AI a barrier to a solution, we would prefer to get them to somebody who can assist them, get them what they need quickly, and really mend that relationship. I personally don’t see AI as the way to do that.
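The hand-off rule Drew describes, tracking sentiment turn by turn and escalating when it degrades or when the AI keeps misunderstanding, is straightforward to sketch. The word-list scorer below is a crude stand-in for a real sentiment model, and the thresholds are invented:

```python
NEGATIVE_WORDS = {"angry", "ridiculous", "terrible", "useless", "frustrated"}

def turn_sentiment(text: str) -> float:
    # Crude stand-in for a real sentiment model: the share of negative
    # words, negated so that lower means unhappier.
    words = text.lower().split()
    return -sum(w in NEGATIVE_WORDS for w in words) / max(len(words), 1)

def should_escalate(caller_turns: list[str], misunderstand_count: int) -> bool:
    # Hand off to a human if sentiment is trending down across recent
    # turns, or if we keep failing to understand the caller.
    recent = [turn_sentiment(t) for t in caller_turns[-3:]]
    trending_down = len(recent) >= 2 and recent[-1] < recent[0] < 0
    return trending_down or misunderstand_count >= 2
```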
Paula Rivera:
Yeah, one of our colleagues here at IntelePeer likes to call it the graceful exit.
Drew Popham:
Yes, exactly.
Paula Rivera:
When the AI is like, “Okay, time for a human.”
Drew Popham:
Yep, yep. Absolutely, absolutely.
Paula Rivera:
Excellent. Thank you. I appreciate those explanations. It’s very clarifying for me.
Let’s jump into the business world. Businesses that are beginning to use conversational AI are often surprised by the technology’s capabilities; they find it’s more than just a better FAQ bot. What platforms and tools are leading the way in conversational AI right now?
Drew Popham:
For us, we use OpenAI models, so we see their products and tools as the right spot for us, but at the same time, we’re also open to other platforms. What we use gives us the flexibility to put in whatever model makes the most sense for the specific use case and the responses the customer needs.
Now, there are other platforms and tools. If somebody said, “Hey, I want you to bring in Gemini or Claude or Llama,” we could certainly do that within our environment. If they’ve spent a bunch of time developing something internally and they said, “Hey, we don’t have the voice capability for this, we’re using it internally, but we want to be able to leverage our own model that we spent a bunch of time customizing,” we would find a way to tie into that and use it. So really, I would say the leading platforms overall are the best ones out there.
Now, there are a lot of interesting things happening with small language models directed at very specific, detailed tasks; they’re interesting and fast, and something we look at as well. And then there are also total point solutions developed with a single focus. Some organizations have done that to great success, but it’s also limiting and not necessarily customizable for what we see across the board.
A lot of our customers have the same issues and challenges, whether it’s scheduling a patient or a customer, “what’s the status of my order,” or “I need to make a payment.” They all have the same types of use cases and problems, but the way they implement them and the business rules around them are unique between organizations. Even within very, very similar verticals, the way they do it and the way they want to engage with their own customers or patients is unique.
And so what we find is that our tool allows us to be customizable in the way we implement and satisfy the same use cases, and the models that we’re using from OpenAI have really achieved everything that we’ve needed from them and more.
Paula Rivera:
Excellent, excellent. What about integrations? Do you see a lot of companies needing integrations, and which integrations are the most important?
Drew Popham:
Yep, that’s a good question. When we talk about integrations, it’s use case-dependent. Typically we prefer to integrate via REST API into wherever the data we need lives, whether that’s a CRM, a patient management system, an EMR, something like that, or wherever their data stores are. Sometimes that’s a challenge.
We have customers with very, very old systems, and we have some on cloud platforms, which are typically developed with newer, fresher APIs and built to integrate with other tools, which is sometimes much easier.
I will say we haven’t run into a situation where we haven’t been able to get to the data we need. But again, we outline, based on the use cases, what information we need. If it’s a patient, can we get to the patient record? Can we pull up what we need to identify and authenticate them? Can we get to the schedule? Can we actually push information into the schedule?
And so we really sit down and break all of those apart. Our goal is to make data access and acquisition efficient for us, and also to make the information we’re putting back in accurate.
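The pattern Drew outlines, look up the record, authenticate, read the schedule, and write back into it, looks roughly like this over REST. The base URL, endpoints, and fields are all hypothetical; a real CRM or EMR has its own API shapes and authentication:

```python
import requests  # assumes the requests package is installed

BASE = "https://emr.example.invalid/api/v1"  # hypothetical record system

def find_patient(phone: str, date_of_birth: str) -> dict | None:
    # Look up and authenticate the caller against the record system.
    r = requests.get(f"{BASE}/patients",
                     params={"phone": phone, "dob": date_of_birth}, timeout=5)
    r.raise_for_status()
    matches = r.json()
    return matches[0] if matches else None

def book_slot(patient_id: str, slot_id: str) -> dict:
    # Not just reading the schedule: pushing the appointment back into it.
    r = requests.post(f"{BASE}/schedule/{slot_id}/book",
                      json={"patient_id": patient_id}, timeout=5)
    r.raise_for_status()
    return r.json()
```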
Paula Rivera:
Ah, interesting. You just mentioned two common use cases, which I believe were scheduling, and I was going to say appointment-making, which is kind of the same thing as scheduling, depending on your viewpoint. What other common use cases are you seeing with customers?
Drew Popham:
“Pay my bill.” “Pay my bill” is a big one. “What’s the status of my order? I want to place an order.” Those are all really common. It’s all very vertical-dependent. We do have a lot of financial services as well that are interested in saying, “Hey, what’s the status of my mortgage?” So rather than taking up a person to go out and find that information and give that to somebody, they want an AI that can look that up, give a response back, say, “Hey, we have your mortgage application. We’re still waiting on a validation of what your latest pay stub is. Would you like us to send you a link so you can upload that?”
And so being able to tackle those kinds of multistep interactions is something people are largely looking for. I mean, FAQs, like you said, are not where the savings are when it comes to driving efficiency in an organization. They need a tool that can handle a multistep process. And having the data from the integrations we talked about a second ago allows us to do that.
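The mortgage example is a good picture of what a multistep interaction means in practice: a data lookup, a status report, then a conditional offer. A hypothetical sketch of that flow; lookup_application and every field in it are invented for illustration:

```python
def lookup_application(customer_id: str) -> dict:
    # Placeholder for the integration call into the lender's system.
    return {"status": "in review", "missing_documents": ["latest pay stub"]}

def mortgage_status_turns(customer_id: str) -> list[str]:
    app = lookup_application(customer_id)
    turns = [f"We have your mortgage application; the status is {app['status']}."]
    if app["missing_documents"]:
        doc = app["missing_documents"][0]
        turns.append(f"We're still waiting on your {doc}. "
                     "Would you like us to send you a link so you can upload it?")
    return turns
```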
Paula Rivera:
I love the “pay my bill” example, and every time I hear that or I see a use case for bill payment, I always kind of chuckle. I’m like, “Well, we’re keeping finance happy.”
Drew Popham:
Oh, exactly. But what’s interesting is that a lot of what we’re seeing for “pay my bill” is that if people can pay with an AI, or if we’re reaching out to say, “Hey, your bill’s a few days past due. Can we get that paid?” people are much more likely to engage with an AI to make a payment, especially a late payment, versus a human, because there’s no guilt associated with talking with an AI. If you have to talk to a person, there’s a feeling of guilt; it may be unreasonable, but some people still feel it.
And so being able to do that with an AI, it’s much more transactional for them to be able to just do that and move on with their day and not have any extra feelings about it.
Paula Rivera:
Huh, the psychology.
Drew Popham:
Exactly.
Paula Rivera:
Very interesting. How are organizations measuring success, Drew?
Drew Popham:
Typically, they’re measuring success in three main ways: how are we driving efficiency, so reducing cost; how are we improving customer experience; and lastly, how are we driving revenue? Before generative AI, it was mostly how are we reducing costs and how are we improving customer experience. But now our ability to drive revenue, to have AI agents that can take multiple steps, make recommendations, and upsell people on something, has really opened up a whole avenue to not just saving money but actually driving new revenue across an organization.
Paula Rivera:
Excellent. Excellent. Let’s jump into the surprises. What are some of the wins and a few laugh-worthy moments from your experiences? Maybe a favorite customer success story.
Drew Popham:
Well, one is, and we tell this story in some of our engagements when we’re talking with new customers, that we’re taking calls for a national pizza chain. What we were finding is we were having great success with people calling in and saying, “Hey, I’d like to order a pizza.” “Great, let me grab your information. What type of pizza would you like to order? You want pepperoni, you want all your vegetables, all your toppings,” whatever. We were having great success with that across the nation, with the exception of one state in the South where, for some reason, we were not having as high a success rate. Everywhere else it was 65%, but for whatever reason, this location was about 20 to 25%. So we thought, “Hmm, let’s dig into our analytics and find out what’s going on.”
And what we found is that in that area of the country, people were calling and saying, “Hey, I’d like to make a pizza.” Well, there was no training for the AI to understand that “make a pizza” means they wanted to order a pizza. The AI would interpret that as, “Well, I can’t make you a pizza, Caller, but I can give you some dough.” Well, not “I can give you,” but, “you can get some dough and some sauce and some cheese and you can make a pizza.”
So it was staying within the bounds of what it was allowed to do; it wasn’t actually saying it would give them all those things. But what we found is that even through all of the testing, there are still things we discover about how people talk. Specific turns of phrase that may be unique to certain parts of the country will still sometimes pop up. And when we went in, all we had to do was tell it, “Hey, if somebody says, ‘I want to make a pizza,’ what they’re saying is they want to order a pizza,” and then that issue went away, the statistics went up in that location, and we moved on to the next case. But that’s one of the fun and interesting things that shows up.
One other interesting item we were seeing is that our platform is also multilingual. And what came out of our data was that people who, in this instance, spoke Spanish were automating at a much higher rate than English speakers. We thought, “That’s interesting. Let’s dig into that a little bit.”
And one of the items that came out of that was that for somebody whose primary language is Spanish, being able to transact in their native language is a much more attractive option than getting transferred to a human or to an office somewhere where the odds of speaking with somebody in their language are lower. And so they will actually stay within automation, go through that process, maybe be a little more patient, and transact fully through the automation because they can do it in their native tongue, which was very interesting to us.
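Going back to the pizza story for a moment: the fix Drew describes amounts to a small addition to the intent layer, teaching the system that a regional phrasing maps to an existing intent. A sketch of that mapping; the structure is illustrative, and in an LLM-based deployment the same fix can be a single instruction added to the system prompt:

```python
# Regional phrasings mapped onto existing intents: the one-line fix
# behind the "make a pizza" story.
PHRASE_TO_INTENT = {
    "order a pizza": "order_pizza",
    "make a pizza": "order_pizza",  # the regional phrasing from the story
}

def resolve_intent(user_text: str) -> str | None:
    text = user_text.lower()
    for phrase, intent in PHRASE_TO_INTENT.items():
        if phrase in text:
            return intent
    return None  # fall through to the model's own intent detection
```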
Paula Rivera:
That’s super interesting. I so was expecting you, when you were talking about the pizza, to say it was New Jersey because so many people in New Jersey just call it a pie.
Drew Popham:
Yeah, that’s interesting.
Paula Rivera:
When I was up in Jersey, it would be like, “Hey, I want to order a pie.” So that’s very interesting about the South. I love it.
What are some ways businesses are adding personality or creativity into their AI?
Drew Popham:
One of the ways we’re doing that is we encourage people, as we’re going through design, to give their AI a name and personality. And if they have multiple locations or multiple parts of their business, they can configure as many of those as they want and give each the personality and voice they want.
And so it’s fun to watch how people or teams pick the name they want to give their AI, and they’ll have it introduce itself: “Hi, my name’s Ava. I’m your virtual dental assistant. How can I help you today?” It’s interesting watching how they customize that, and also how they decide what they want the AI to handle. Sometimes they’ll say, “You know what? We want this to be a human experience, so we want the AI to handle X, Y, Z use cases, but we want some things to still be human.” Or they’ll say, “We want you to gather a little bit of information first, send it to us when you send us the call, and then we’ll engage at that point.”
And so it’s interesting to see how they do that, not just trying to automate the whole call but maybe making it more of a hybrid experience, because it allows their humans to spend more time on more critical things. If we’re handling a lot of the simpler pieces, or the parts of the call where we just have to gather some information and hand it to a human, then the people they can upskill and give further training, their best core people who give the best service to their customers, can handle more engagements, handle the more important engagements, and improve the customer experience altogether.
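Configuration like the following is one plausible shape for the persona and hand-off choices Drew describes; every field and value here is invented for illustration:

```python
# Hypothetical per-location persona and hand-off configuration.
ava = {
    "name": "Ava",
    "greeting": "Hi, my name's Ava. I'm your virtual dental assistant. "
                "How can I help you today?",
    "voice": "natural-female-1",
    "allowed_tasks": ["schedule", "reschedule", "faq"],
    "human_handoff": {
        "always_human": ["billing_dispute"],  # tasks kept with people
        # hybrid mode: collect these, then pass the call plus context along
        "gather_first": ["name", "callback_number", "reason"],
    },
}
```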
Paula Rivera:
I love it. I love it. Speaking of being more human, let’s get a little bit into who Drew is.
Drew Popham:
Oh, boy.
Paula Rivera:
I’m going to ask you a couple of rapid-fire questions. Say the first thing that comes to mind. What’s a phrase you wish chatbots would stop saying?
Drew Popham:
“I’m sorry. I didn’t get that. Can you repeat it?” Nothing drives me crazier than an AI that can’t understand you and just repeats that over and over and over again. Every airline I call says that to me. I need a contact at an airline, please, so I can help them with that.
Paula Rivera:
I think you’re in good company with that one. If conversational AI were a pop star, who would it be and why?
Drew Popham:
Oh, that’s a good question. Who would that be as a pop star? See, you got me. I don’t listen to a lot of pop music. Beyoncé? I guess she can fit just about any use case, and she’s a multi-threat across multiple types of music. I’m not sure.
Paula Rivera:
Yeah, I like that response. I think she’s very versatile, so I think you probably hit the nail on the head with that one.
Okay, last question. Would you rather have an AI assistant that always gets your coffee order right, or one that can handle your emails without supervision and not in Shakespearean language?
Drew Popham:
Emails without supervision. I’ll take care of my coffee. If they’re handling emails, I got all the time I need to make my coffee exactly how I like it.
Paula Rivera:
I love it. Drew, I have so enjoyed speaking with you. I think we had a pre-meeting and I crumpled up the questions and threw them out halfway through this conversation, so you rolled with the punches quite well. Thank you so much, Drew.
Drew Popham:
Absolutely. This was a lot of fun. Happy to do it. And I’d like to talk again another time, anytime. Just let me know.
Paula Rivera:
Wonderful. Conversational AI isn’t just a buzzword, it’s a business enabler. From cutting support costs to boosting engagement, it’s changing how companies connect with people. A huge thanks to Drew Popham for joining us and shedding light on the realities of this technology.
Be sure to subscribe to The AI Factor, and if you liked today’s episode, share it with a colleague or drop us a review. Until next time, stay curious and keep exploring The AI Factor.
About this episode
Diving into the world of conversational AI — what it is, how it’s evolved from the clunky chatbots of the past, and what it means for the future of customer engagement. Speaking with IntelePeer’s Drew Popham, a seasoned solutions engineer who works directly with organizations deploying these technologies, we discuss the difference between conversational AI and chatbots, tools in the marketplace, and how businesses are using the technology.
For those looking to understand how AI is shaping the future of CX.