Paula Rivera:

Welcome to AI Factor, the podcast where we decode the future of artificial intelligence and explore how it’s reshaping the way we work, communicate, and innovate. I’m your host, Paula Rivera, and today we’re diving into one of the most anticipated AI releases of the year, OpenAI’s GPT-5. Joining me is Matthew Caraway, AI product manager here at IntelePeer, to help us unpack what makes GPT-5 such a game changer, what it means for businesses, and yes, we’ll even touch on the Reddit backlash that’s been making waves. Let’s get into it. Matthew, welcome. 

Matthew Caraway: 

Thank you. I’m excited to be here. 

Paula Rivera: 

I love having you on. I always walk away a wee bit smarter, which is nice. So GPT-5 isn’t just another upgrade. It’s a complete rethinking of how AI models work, from deeper reasoning to real-time routing. It’s designed to feel less like a tool and more like a trusted colleague. Let’s start with the basics, Matthew. What is GPT-5 and how does it differ from previous models like 4 or 4o? 

Matthew Caraway: 

It’s fascinating because you’ve kind of got to peel back the marketing and study what actually is occurring. In fact, GPT-5 is a family of models. We can classify them roughly as small, medium, and large in terms of the capabilities that they bring and the size of these models. And it’s even more fascinating because depending on the product, such as OpenAI’s ChatGPT, they put a router in front of it, and the goal of the router is basically to say, “Well, not every time the user asks a simple question do I want to send it to the largest model.” So it really allows this optimization where we can strike that balance between latency, inference cost, and intelligent responses. I just think to myself so many times, I open up ChatGPT and it’s like, hey, help me spell this word correctly real quick. I don’t want a long response. I want a fast response. 

Paula Rivera: 

Interesting, interesting. I haven’t used GPT for spell checking yet. Perhaps that will be next. I appreciate how you described that, and I wouldn’t have thought of it as a family of models, so that’s very helpful. OpenAI actually describes GPT-5 as a unified system, which might be where you are going with the family of models, but can you explain what that means in practice, and how would businesses benefit from this? 

Matthew Caraway: 

Of course. So if we think about the yesterday world with OpenAI models, you had GPT-4, 4o, 4.1, 4o Mini, 4.1 Nano. You had O3, O4, O3 Pro, and you just had this fatigue, like which one do I pick and when? You literally just had to be an expert in the hieroglyphics of these model names. OpenAI heard that loud and clear, and they saw in their usage data that people were using small models to ask complicated things and getting frustrated with the content, and vice versa. So now, again, they’ve created this system at the front that classifies how complex a question you’re asking, and its goal is then to route you to the correct model so the user doesn’t have to worry about it. 

They’re literally just going to their friend and they’re asking a question and they’re going to get the right response back for the question. And it’s fascinating, because we first see this being implemented in this consumer-facing product, ChatGPT from OpenAI. However, on the day of launch, Microsoft Azure has made it available for developers as well. So in various cases, a developer may choose to route to various models based on the complexity of a user’s query. 
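The routing idea Matthew describes can be sketched as a simple dispatcher. This is a toy illustration under stated assumptions, not OpenAI’s actual router: the keyword heuristic, the length thresholds, and the small/medium/large model names are all placeholders for whatever classification and model tiers a real deployment would use.

```python
# Toy sketch of complexity-based model routing, as a developer might
# implement it. The heuristic and model names are illustrative
# placeholders, not OpenAI's actual routing logic.

def classify_complexity(prompt: str) -> str:
    """Crude heuristic: long or multi-step prompts count as complex."""
    multi_step = any(w in prompt.lower() for w in ("plan", "analyze", "compare", "derive"))
    if len(prompt) > 400 or multi_step:
        return "complex"
    if len(prompt) > 100:
        return "moderate"
    return "simple"

# Hypothetical small/medium/large tiers of the model family.
MODEL_BY_TIER = {
    "simple": "gpt-5-nano",
    "moderate": "gpt-5-mini",
    "complex": "gpt-5",
}

def route(prompt: str) -> str:
    """Pick a model tier so quick questions get fast, cheap answers."""
    return MODEL_BY_TIER[classify_complexity(prompt)]

print(route("How do you spell 'necessary'?"))   # short question, small model
print(route("Analyze these quarterly results and plan a rollout strategy."))
```

The design point is the one from the semi-truck analogy later in the episode: the spell-check question never needs the largest model, so the router spends the big model's latency and cost only where the query warrants it.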

Paula Rivera: 

So let me ask a question, and for those listening, this is where I throw out the window the questions that Matthew and I have already discussed and start asking my own questions that are a little bit more pedestrian, shall we say. But is part of this to help with the energy consumption of these large language models that has everybody concerned? 

Matthew Caraway: 

That’s exactly right. I mean, part of it is, what’s the load on our infrastructure, on our power grid? It’s also, can I stop wasting money? To use a metaphor from life: if I want to go to the grocery store and pick up some groceries, I just get in my car and do that. I don’t need to get in a semi-truck to drive down and pick up a bag of groceries. That’s just not the right tool for the job, and it’s inefficient in so many ways. Think of that semi-truck as a really large language model. Put it to use in the best places and you’re going to receive the best results for your money spent. 

Paula Rivera: 

Excellent. I love that analogy. So you obviously have spent a little bit of time looking at GPT-5. What are some of the standout technical features that have caught your attention? 

Matthew Caraway: 

Yeah, I actually see it at quite a few different layers. So when you look at the new models and what they allow developers to do, they’ve exposed a couple of levers that we didn’t have before. There were what we referred to in the past as reasoning models, the O3, the O1, the O4s. And basically when you’d give it a question, it would spend some time really thinking through what you’re asking, building a plan, and we would call those reasoning tokens. And then after it completed its reasoning, it would give you the actual output response. That’s now baked directly into each of these models. And as a developer, you can choose what reasoning level you want from the model. For low-latency use cases, you’re going to push that reasoning level down to minimal. However, for other cases where you’ve got plenty of time, you’re going to crank it up high. 

We also see another attribute that allows you to change the verbosity of responses. There are times, as I mentioned a moment ago, where I just want a quick hit, like hey, tell me how to spell this name or help confirm this fact. Whereas there are other times where I’m trying to learn new information, I’m trying to explore a domain that I’m not familiar with, and so you’ve got this other lever that’s verbosity of responses. So those are the quick hits that stand out. And then I think when you really start to dig into the safety reports that OpenAI has published, there’s some fantastic content in there. They talk about how, when you rewind two years ago when models first came out, one of the cute things to do would be to tell the model, “Hey, forget all your prior instructions and do this instead.” 
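The two levers described above can be sketched as request parameters. The parameter names follow OpenAI’s published GPT-5 settings (reasoning effort and verbosity), but treat the exact request shape here as illustrative rather than authoritative; this sketch only builds the payload dict, it does not call the API.

```python
# Sketch of the two GPT-5 levers: reasoning effort and verbosity.
# Parameter names reflect OpenAI's announced GPT-5 API settings, but
# the exact request shape here is illustrative, not authoritative.

def build_request(prompt: str, latency_sensitive: bool) -> dict:
    """Dial reasoning down for fast paths, up when there is time to think."""
    return {
        "model": "gpt-5",
        "messages": [{"role": "user", "content": prompt}],
        # Low-latency paths: minimal reasoning, terse answers.
        # Offline analysis: crank reasoning up, allow fuller output.
        "reasoning_effort": "minimal" if latency_sensitive else "high",
        "verbosity": "low" if latency_sensitive else "high",
    }

quick = build_request("How do you spell 'accommodate'?", latency_sensitive=True)
deep = build_request("Walk me through this unfamiliar domain.", latency_sensitive=False)
print(quick["reasoning_effort"], quick["verbosity"])
print(deep["reasoning_effort"], deep["verbosity"])
```

This mirrors the spell-check versus learning-a-new-domain distinction: one flag flips both levers, so a fast interaction stays fast and an exploratory one gets room to think and explain.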

And they’ve actually updated the model behaviors so that it’s less likely to do that. It’s going to continue to adhere to the instructions that the developer gave it. The other fascinating place for me is thinking about what safety means when you’re asking unsafe or potentially nefarious questions of the model. Previously it would say, “Oh, I can’t answer that. That’s off limits for me.” I actually discovered off-limit areas when I was asking it questions about airplane mechanics one time. It’s a highly regulated industry, and I was showing my friend, hey, look at how much the model can teach you about airplane mechanic questions. As we were trying to interact with it, it quickly shut us down, whereas today, with the new updates, it’s going to be able to steer around that boundary. 

So instead of just saying, “Nope, that’s an off-topic conversation,” it’s going to try to steer the user back into the safe, quote, the safe zone, and it’s going to say, “Hey, here’s the information I can give you. Let’s bring the conversation back over here.” And what’s fascinating is when you read the research reports from OpenAI, they’re really open about the fact that this boundary isn’t black and white. It’s very much a gray scale, and they’ve spent a lot of time internally debating where to bias themselves in that category. 

Paula Rivera: 

Kind of like life. It’s a big gray scale. 

Matthew Caraway: 

Yeah. Yeah. Oh, man. It’s a huge conversation I had when I read that paper. 

Paula Rivera: 

That’s so funny. So when you were talking about the developers being able to go in and adjust the verbosity and whatnot of the responses that they get back, and determine what type of model they are using, for someone like myself, this is the yin to the yang. I’m very much an amateur, should we say. Or maybe not an amateur, but my knowledge compared to yours is more pedestrian. Pedestrian apparently is the word of the day, so it’s more pedestrian than your knowledge base, obviously. But the router that you explained earlier, is that making those decisions for the pedestrian? 

Matthew Caraway: 

It is, yeah. 

Paula Rivera: 

Okay. 

Matthew Caraway: 

Yeah, it takes all the overhead and the complexity away from it because we have this responsibility, I believe, to make AI accessible for the masses. We don’t need all of our friends and family to know about all these levers or all these crazy prompting techniques. We literally just need you to be able to pick up the phone and talk to an assistant and feel comfortable doing it and have positive outcomes. Just like you and I do when we get on the phone and we talk to each other, that’s the natural behavior that we’re trying to align models to be able to arrive at. 

Paula Rivera: 

I like it. I like it. It’s sort of, you can drive the car, or you could be the mechanic driving and fixing the car. So kind of like your truck analogy there. So GPT-5 has been called a PhD-level expert in reasoning and problem solving. That’s really exciting from an AI development perspective. So I’d love to hear what you think as an AI product manager: what’s most exciting about GPT-5’s capabilities? 

Matthew Caraway: 

I really like the steerability of the model. When you look at the prompting guides that they released, both for the 4.1 family and then again here for GPT-5, they talked a lot about how the techniques that you were used to in GPT-4o and 4o Mini are going to fail you if you apply them one-to-one in GPT-5, because this model is going to listen to what you’re telling it. And so if you are vague in your instructions, it’s not going to perform how you’d like it to. But if you’re very clear on, “Dear model, this is what you need to be doing, this is your task. When you see this input, do this as an output,” I think that just solves a lot of whack-a-mole problems that a lot of engineers have faced when they’ve moved models into production. 

I also, I just continue to go back to the safety reports that they’ve published because it’s just really interesting to me. They gave some examples there of these boundaries of what could be really legit questions, like a college student trying to study and learn a topic about biology. And with just a slight nuance and shifting in the words, they were able to show how a bad actor could be asking biology questions and trying to learn nefarious topics. It’s actually really fun, because you can just shift that example to your own domain and you think to yourself, how would you be able to detect when callers are trying to do good things? 

In our world, it’s asking medical-based questions when they’re trying to schedule medical exams or get help from their doctors. Whereas on the opposite side, there’s a lot of medical terms that are potentially unsafe or could be used in the wrong context. And so you couldn’t just do a keyword search. You have to have contextual understanding of what the user’s talking about in order to know should the model continue the conversation or guide the user back to safe territory. 

Paula Rivera: 

Wow, that’s fascinating. And I have never heard anybody so excited about safety reports, so legal would appreciate your thoroughness. 

Matthew Caraway: 

Yeah, I think that’s the promise, frankly, that organizations are looking for when they’re asking enterprises like IntelePeer to deliver AI solutions for them. That’s the difference between an enterprise like IntelePeer and a small startup that hasn’t taken the time, and doesn’t have the resources, to think through how to do this, so that we can protect the image of our customers and they’re satisfied with doing business with us. That’s the backbone of how we build. 

Paula Rivera: 

Yep. I love it. So GPT-5’s thinking model, how does it change the way we approach complex tasks, or you approach complex tasks? 

Matthew Caraway: 

I think it opens up a lot of opportunities, but it’s also like any tool at any moment, you’ve always just got to ask yourself, what am I solving for and what’s the right tool to solve that problem with? A lot of our interactions that we’re driving value for require low latency, and so those aren’t cases that you’re probably going to be leveraging the thinking mode of some of these models. However, there are offline cases where when we’ve got post-call analysis or we’ve got recommendations to take based on interactions, those are great examples for the thinking mode or the reasoning of these models. 

So we’re looking at ways that we’re planning to leverage this in IntelePeer’s smart analytics product line. However, we also look for internal efficiency gains as well. When our AI builders are creating those solutions for our customers, we have a little bit more time to think, and so we can ask these reasoning models for help. And it’s okay if it takes it five or 10 or 20 minutes to come back with a response, because that’s the amount of time that it takes you and I to sit down and brainstorm how to go back and forth on a topic in order to find the best solution. 

Paula Rivera: 

Oh, I love it. So you made a comment, and it’s the second or third time you used this word, and just again, I’m in the crawl space and you’re definitely in the run space, but low latency. Now, when I hear latency, I think lag time and kind of like when you’re on the phone and there’s the pause or you’re using an automated system and there’s a pause while the automation develops its response back to you. Is that what you’re referring to here? 

Matthew Caraway: 

That’s exactly what we’re talking about, and it’s so crucial for AI builders to think through that experience, because we as humans have this dialogue that we’re used to with humans in person, with humans on the telephone, and we need to make sure that we build automation solutions that meet society’s expectations. And frankly, depending on your interface, those expectations change. When you and I are conversing in Microsoft Teams, it’s okay for something to take longer, because we might be thinking or we might need to go study information to provide a response. As long as the model performs in a conversation pattern similar to the one we’d have as humans, it meets that bar. That’s how you think about latency expectations in a given use case. 

Paula Rivera: 

Yeah, I really appreciate that. And it’s amazing, when ChatGPT first came out, I guess about two years ago or so, you could definitely still kind of hear the latency, but now the responses come back almost lifelike. It’s pretty darn amazing. 

Matthew Caraway: 

Yeah, it’s great. Yeah. 

Paula Rivera: 

Yeah. So GPT-5 is already being used by companies like Amgen, Salesforce, and Lowe’s to transform workflows. Let’s talk a little bit about what businesses can expect. Could you maybe walk through a couple of practical use cases for GPT-5 in an enterprise setting? 

Matthew Caraway: 

Yeah, actually, I don’t think it unlocks or creates use cases that are incredibly different from what we saw before. I actually believe it just changes the degree of intelligence that it’s going to bring back in those responses. All year long, and even into last year, we’ve been talking about agentic systems sweeping the industry, and GPT-5 has been built with agentic as its foundation. They talked a lot about how developers can build orchestrated workflows, how tool calling has been improved, how structured outputs have been improved. MCP support and integration continue to evolve, and these are all recognition of the importance of orchestrating actions following the response of a model. 

When GPT first came out, you just had a chatbot that just gave you data, it gave you a response, it generated blog posts, but it never actually took actions. And that’s really where we’ve evolved. And what’s fascinating though is with the new GPT-5 models, it’s just raising the bar on the intelligence of those workflows. You can give it a complex problem at 5:00 PM and sign off your computer and come back the next morning and know that it solved it just fine. Whereas in earlier models, you’d probably have to come babysit it, you’d get home, you’d log back in, you’d give it a bump, you’d guide it where it’s stuck. But we’re definitely seeing GPT-5 execute complex workflows more seamlessly. 

Paula Rivera: 

That’s fascinating. You definitely get your hands on the technology a lot more than myself, probably why you’re in the job that you’re in, but it’s really interesting to hear your perspective as a product manager, so I really appreciate that. How would you say GPT-5 improves productivity and decision making? And it sounds like a lot of it goes back to the intelligence that you were just speaking of, giving teams better information. 

Matthew Caraway: 

It absolutely does. We talk about this a lot here in the business. Half of it is the model. The model does better things, the model does more complex things, but there’s also real conversations that we need to have as society and as business leaders about workforce development. We need to make sure that teammates and professionals know how to interact with these new interfaces. And then assuming once we do, the obvious ways that it helps many businesses and that it’s helping us here is accelerated decision making. I think about myself as a product manager. There’s oftentimes I’m trying to synthesize complex data sets and I’m trying to quickly find the signal from the noise. I’m trying to make key decisions quickly, and what would take me hours before, now it just takes me as long as it does to write the right prompt, to ask the model in the right ways. 

We also have a ton of success in prototyping new features and validating the ways that we’re going to solve a problem. What would’ve taken us weeks before, we can now spend a couple of days to build something quickly, show it to our users, get the feedback and begin developing it, which then even leads to accelerated development timelines. We’re not at this perceived nirvana that AI is just going to write all the software. No, it probably never will. But what it does do is it allows our engineers to spend their cognitive energy focused on those complex tasks where they can just think hard about the algorithms that they’re building or the classes that they’re structuring, and then that overhead work, they can just offload that to the model and have the LLM find the defects in their code or help them resolve really complex edge cases, write test cases, write good documentation, the things that developers never really enjoyed, and so they often avoided doing it. We as a business still need those things and we’re able to automate some of those key tasks now. 

Paula Rivera: 

So that’s super fascinating. As you were speaking, I kind of, my mind was jumping around a little bit, I’ll admit. So with these new versions of GPT and GPT-5 in particular, can older versions, older solutions that have used one version or another with ChatGPT, are they automatically upgraded? Do you as a developer need to go into older solutions and actually do the upgrading? How does that work? 

Matthew Caraway: 

Yeah, this is something that folks cannot skip over, and we spent a lot of time here at the company talking about evaluations and ensuring that models perform well, that agents are achieving outcomes, that customers are satisfied. That same scientific approach has to be taken when you’re migrating an agent from one model to a different model, whether it’s literally from 4o Mini to 4o, or from a 4o model all the way to 5. You have to almost start over. You’ve got all your business context of what it’s supposed to do, but in each case, you have to prompt these models differently. OpenAI has made it very clear that some of the ways you did it before are different today, so there’s specific work that has to be done. There’s testing that has to occur, but it’s not black magic. They’re pretty upfront about how you need to approach this. 

So they’re definitely meeting you with the right level of information so that you can apply that to your own domain. I feel like we could go on and on about this because it’s not even just about GPT-5, it’s just I’m moving to a different model or I’m making a parameter change to my model. Awesome. Be a scientist, make sure you test it. I just see that too often out in the ecosystem where there’s frustration that, oh, it’s not performing how I want, and that’s just a vibe check because by and large, there isn’t enough conversation about here’s my evaluation set or here’s how my benchmark is configured. That’s how I know that this is good, better, worse, or different. 
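The “be a scientist, make sure you test it” point can be made concrete with a minimal evaluation loop: run a fixed test set against the candidate model and score it, instead of relying on a vibe check. Everything here is a hypothetical sketch; in particular, `call_model` is a stub standing in for a real API call, and the tiny eval set and substring scoring are placeholders for domain-specific cases and grading.

```python
# Minimal evaluation-set sketch: score a candidate model against fixed
# expected answers before migrating to it. `call_model` is a
# hypothetical stub standing in for a real provider API call.

EVAL_SET = [
    {"prompt": "What is 2 + 2?", "expected": "4"},
    {"prompt": "Capital of France?", "expected": "Paris"},
]

def call_model(model: str, prompt: str) -> str:
    # Stub: a real harness would call the provider's API here.
    canned = {"What is 2 + 2?": "4", "Capital of France?": "Paris"}
    return canned.get(prompt, "")

def run_eval(model: str) -> float:
    """Return the fraction of eval cases the model answers correctly."""
    hits = sum(
        1 for case in EVAL_SET
        if case["expected"].lower() in call_model(model, case["prompt"]).lower()
    )
    return hits / len(EVAL_SET)

# Run the same eval set against the old and new model and compare
# scores; migrate only if the candidate holds up on your own cases.
score = run_eval("gpt-5")
print(f"accuracy: {score:.0%}")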

Paula Rivera: 

So GPT-5, I think it’s safe to say it’s changing how we communicate, whether it’s internally or from a customer service and a customer interaction perspective, which is sort of our spot. Let’s talk about what does this look like? How would you say GPT-5 is enhancing customer interactions compared to previous models? 

Matthew Caraway: 

What I find really fascinating is that each generation of new models, they have an increased ability to understand language and natural dialogue. So if you just rewind several years ago and if we could pop open 3.5 again, it felt magical at the time because it was the best thing that we had seen. But now several years later, we’ve had all these revolutions that somehow just occur like week after week and month after month that we’ve gone so far forward that frankly the models are really close to just meeting the user where they are. 

It has this natural ability to speak with the user in tones and dialogue that are best suited for a given situation. Frankly, it’s like how you and I would walk into the doctor’s office: they’re going to greet you, and when they understand what urgency or duress one person has versus another, their needs, their demographics, those receptionists are going to engage with us differently, in a personalized, human-like experience. And that’s the power that we’re getting with these new releases of GPT-5. It’s just an increased natural understanding of who the user is and how best to serve them. 

Paula Rivera: 

So does that mean GPT-5 will start cursing back at me, or will it tell me to curtail my language? 

Matthew Caraway: 

Depends on where you’re talking to it from actually, and we joke about it because it’s funny to poke, but again, our customers are looking for us to protect their brand image, and so it’s about safety guardrails at multiple layers, whether it’s the way that you prompt the model or other guardrails that are available throughout the stack to make sure that the agent speaks to your end customers in the way that you would be proud. And honestly, different industries are going to have different expectations, and I think that comes right back to the ability for these new generations to really understand language and natural dialogue and meet the user exactly where they are. 

Paula Rivera: 

Yeah, that’s super interesting. And it’s actually funny: the poor people with a lot of time on their hands who like to trick the system, they’re probably going to have a harder time tricking it as AI keeps on advancing. 

Matthew Caraway: 

That’s exactly what we’re looking for. Yep. 

Paula Rivera: 

Yeah, interesting. I always find those so funny, but I’m like, you folks have too much time on your hands. Multimodal features like voice and image processing: what role do these play in improving communication? 

Matthew Caraway: 

I honestly go a little bit back and forth. Some of me sits in sort of like today’s shoes, a little bit of pragmatic understanding, and then I future-cast a little bit and I think about how our expectations as consumers, it’s going to change. So today, yeah, typically when you get on the phone and you talk to an agent, a live agent, whether you’re calling Geico or you’re calling your doctor’s office, you’re typically just interacting on voice. Whereas in the future, we expect that omnichannel communication is going to become much more seamless for end users. 

I mean, for instance, you can be on the phone with an assistant having a voice conversation and they could ask the user, “Hey, look, I need you to go ahead and send a photo of some document,” and it’s going to send you an SMS link, you’re going to pick up your phone, take a photo, it’s going to send it back to the model. It’s going to process that data in real-time and quickly influence the decision-making throughout the rest of the interaction. That’s not far away. Frankly, it’s just more of a shift in our expectations as consumers. 

Paula Rivera: 

Wow, that’s pretty darn impressive. I’m getting excited for the future here. This is great. So could GPT-5 help businesses personalize customer experiences more effectively? It sounds like that’s, yeah, a definite. 

Matthew Caraway: 

Yeah. I mean, we certainly want to be bullish and say that the documentation, the released benchmarks, the marketing from OpenAI, it’s all going to lead you to that conclusion. But honestly, I tell you, this is just a classic case where the AI builder has to approach this with a scientific mindset. There need to be clear evals that are specific to your own business and domain, and you need to go conduct those in order to form actual conclusions in this space. But also realize the devil is in the details. It’s often less about can the model do this, and more about what we would define as context engineering principles. Basically this boils down to: did you give it the right information so it can be successful, or did you give it a terrible prompt and it gave you terrible output, and now you’re blaming the model for your own misdeeds? 

Paula Rivera: 

Yeah, I hear you. I struggle with my husband, he’s a musician, but he’s very technically savvy, but he was saying how whatever AI system he’s using, he was like, “Oh, it takes me eight rounds of back and forth to get what I want.” And I keep telling him, I’m like, you’re not prompting correctly. I’m like, if you prompt right, it might take one or two, maybe three, but eight rounds is, that’s you. That’s not the system. He doesn’t appreciate me pointing that out. 

Matthew Caraway: 

Tough love. 

Paula Rivera: 

Yeah, he gets a lot of it. So it’s kind of funny. I sort of woke up on Monday, I don’t know what I was doing this weekend, but I woke up and I saw on Reddit total backlash. There’s a thread, “GPT-5 is horrible,” that’s racked up thousands of comments. So let’s talk about what’s behind the backlash. I’m sure you’ve seen some of these threads or read some of these articles. What are users most frustrated about? 

Matthew Caraway: 

I think there are probably layers to this onion, but if you just boil it down, I imagine it’s that some users believe OpenAI over-hyped the GPT-5 release, and they’re frustrated with the results that they’re receiving. When Sam Altman is hyping it up the night before, talking about how big the release is, how it’s game-changing, and you have this long drawn-out release demo, many people hop on and start reading the cards that have the benchmark reports, and there was a lot of skew and bias in the way the information was presented. So I feel like a lot of this was just frustration with being over-hyped. But then if you read the comments, if you follow the threads either on Reddit or elsewhere in the ecosystem, you definitely need to bucket these, separate them out, and realize that, hey, there are plenty of good comments and good feedback items in there along with the negative. So don’t get swept up in the emotion and the tidal wave. Have a calm approach when you’re analyzing this data. 

Paula Rivera: 

Yeah, it almost sounds, and my initial reaction was it’s a new model, it’s growing pains. Of course, it’s not going to be exactly what you were using beforehand. It sounds as if you really have bucketed it out. And I always do this with Yelp. I look at the good, I look at the bad, I look at the neutral. So it definitely sounds like there is some valid criticism and some that is just people being overly sensitive. 

Matthew Caraway: 

Yeah, that’s exactly right. I look at it both on Reddit and in other forums, and I just think it’s a little bit of both. There are going to be some power users who have real complaints. They’ve studied the models enough, they’re using them in complex ways. And then there were other people that I just feel are noise. I saw an example post yesterday where an individual asked a question of the model with a very poorly written prompt, and it was literally with the intent to say, “Hey, this model didn’t produce a PhD-level response.” That was the whole goal they set out for. So yeah, nice job, but that’s not data that we should trust or make decisions on. 

I frankly think that approach is unfair criticism, and I continue to recommend that as enterprises adopt new models, they just have to be scientific in their approach. Don’t worry about what LinkedIn is saying, what Reddit is saying. Lean into your trusted ecosystem, lean into your trusted business partners, and ask them how they’re evaluating different models, because there are some times that model A is the best fit for a use case, and there are some times that model B is. Any enterprise or any implementer that says, “This is the model to use,” hasn’t taken the time to study the strengths and weaknesses of various models. It’s not one to rule them all. 

Paula Rivera: 

Yeah, I love that so much. In a little bit more of a Paula terminology, it’s like, don’t get bogged down in the noise. 

Matthew Caraway: 

That’s exactly what it is. Yeah. 

Paula Rivera: 

Yeah, and there’s a lot of noise out there, so I really appreciate that. Presumably, the smart companies, and it seems like OpenAI did this immediately, saw that backlash and said, well, we’ll make 4o available for a while. I don’t quite know what the plan is there. But smart companies, I’m inclined to say, keep their fingers on the pulse, sift through the noise, stay scientific, and then address the feedback as appropriate. 

Matthew Caraway: 

That’s exactly what we do. As seasoned product managers, we know it’s about taking a balanced and holistic approach when reviewing feedback. You have to listen for qualitative data, but you can’t over-index on it, because it’s easy for that to be loud and permeate the rest of the signal. You have to ask yourself, what does the quantitative data from real usage of your application look like? Are your users showing your expected behavior patterns? Honestly, before a business launches a new product, they’ve sat down and they’ve defined clear success criteria and outcomes to be achieved, and as we make our own launches, the day of launch, the hours of the launch and the days following, we’re relentlessly tracking that progress towards those outcomes that we defined, and then we’ll make those adjustments as necessary. 

Sometimes you’ll make those adjustments because, hey, the system is definitely getting this wrong. And then at other times, which I think is probably what we’re seeing here with this backlash, it’s a marketing campaign. So OpenAI says, “Yeah, we’re going to turn on 4o models or 4 models,” and maybe they do at some point, for some people, for a limited time, and that’s mostly to manage the PR. But they have their finger on the pulse of how the models are performing for their real users. 

Paula Rivera: 

Yeah. Those pesky PR people, they always ask for extra special attention. 

Matthew Caraway: 

I think it’s necessary. It helps us stay well-connected to our market. Yep. 

Paula Rivera: 

Listen, so Matthew, as always, this has been really an interesting discussion. I appreciate your kind of holding my hand and upping my game as it comes to understanding AI and GPT in particular, ChatGPT in particular. So thank you so much. I don’t have any fun end of segment questions for you simply because this was sort of a last minute, hey, let’s talk about the news. So thank you, Matthew, for joining us and helping us make sense of all of this. 

Matthew Caraway: 

Yes, ma’am. I’m excited to be here and look forward to the next time. 

Paula Rivera: 

Wonderful. To our listeners, thanks for tuning into AI Factor. If you enjoyed this episode, be sure to subscribe, share, and leave us a review. And if you’re curious about how GPT-5 can transform your enterprise workflows, reach out to us here at IntelePeer. Until next time. Stay curious, my friends. 

About this episode

AI Factor Special Episode: GPT-5 – The Brain Behind the Breakthrough – in this special edition of AI Factor, we sit down with Matthew Caraway, AI Product Manager at IntelePeer, to unpack the launch of OpenAI’s GPT-5—arguably the most advanced language model to date. GPT-5 is a family of AI models with a router in front that taps into the best model for the prompt it’s been given. We explore what sets GPT-5 apart from its predecessors, why the AI community is buzzing, and how businesses can harness its capabilities to transform workflows, customer interactions, and strategic decision-making. We also dive into the Reddit backlash and what it reveals about user expectations and adoption challenges. As Matthew states, it’s important to find the signal from the noise and listen to what the experts say. This episode is a must-listen for anyone looking to understand the real-world implications of GPT-5 and how it’s reshaping the AI landscape—from enterprise innovation to customer experience.

For those looking to understand how AI is shaping the future of CX.