
Risk & Resolve
The Risk & Resolve Podcast is your go-to resource for insightful conversations at the intersection of leadership, business ownership, and the insurance industry. Hosted by Ben Conner and Todd Hufford, this podcast dives deep into the challenges and opportunities that leaders face in an ever-changing world.
Each episode features candid discussions with business owners, industry experts, and thought leaders, exploring topics like innovation, risk management, and the strategies that drive success. Whether you’re an entrepreneur, executive, or insurance professional, you’ll gain actionable insights and inspiration to navigate today’s complex business landscape.
Tune in to Risk & Resolve—where leadership meets resilience.
Jason Beutler – NextGen Healthcare Summit 2025 Recording Series
AI isn't magical but mathematical, and understanding how it works removes mystery while helping make informed business decisions. We break down the fundamentals of AI in a way that demystifies its inner workings and explains why it's transformative for businesses.
• Understanding how AI processes a simple request like "write a haiku about puppies" reveals its potential for business applications
• AI converts language into mathematical vectors to find relationships between words and concepts
• Context windows limit how much an AI can "remember" in a conversation, similar to human short-term memory
• Retrieval Augmented Generation (RAG) allows businesses to prioritize their own data when generating AI responses
• AI agents can perceive environments, make decisions, and take actions toward specific goals like searching and organizing data
• New business models like "service as software" are emerging, where AI handles operations behind traditional services
• Four principles for effective AI: treat it like a human, include it in brainstorming, expect continuous improvement, and keep humans in oversight roles
You're listening to Risk and Resolve, and now for your hosts, Ben Conner and Todd Hufford. Well, that made me sound really geeky, but I'm wearing a suit, so I can't be that geeky, right? I really did want to wear jeans and a t-shirt, but you know, it is what it is. So here's what we're going to do today. We are going to talk about how AI works. I want to break it down because here's what I believe. Let me first make sure this moves to me. There's me. Why AI? All right. So what are we talking about here? Why do we care about AI? What is it that's actually happening? What is it that's going on behind the scenes, under the covers? It's not magic, so let's understand that. So, anyone like math? There's not a lot of hands going up, all right. There's some hands going up, all right. We're going to get into some of the math. Now, I'm not going to get too mathy, but here's what I think: if we understand how things work, it takes the mystery out of it and we can start to make informed decisions about it. So that's where I want to be here today. That's my goal: I want to take some of the mystery away from it. We're not going to get into all the calculus and all the details, but I'm going to try and explain this in such a way that you can see that this isn't magical, and that there are some decisions we can make about how this all works. So we're going to talk a little bit about puppy poetry. We're then going to talk about the tools that exist. We want to make sure we understand all those different tools inside of the AI stack, specifically how you would leverage them inside of a business. Sound good? All right, let's jump in.
Speaker 1:Why all the hype? All right, who has done this? This is in Claude. Who has gone in and said, write me a haiku about puppies? It's kind of the starting point of AI, right? You come in and you're like, hey, give me something to, like, show me that it can do it. And it did. And we're like, woohoo, right? We're all excited, we're thrilled, we're like, this is business revolutionary, right? Well, it is, and from a computer science standpoint, I want to break down why and I want to talk through that standpoint.
Speaker 1:So, first of all, what's going on here? This word haiku, what does it mean? Well, we know what it means, right? We know that it's poetry, we know that it's three lines, we know that each line has a certain number of syllables. So we understand that there are some rules around what that word means. But what's really cool here is the computer understood that, and that is something worth taking note of.
Speaker 1:The second thing is I said, hey, here is this set of rules called a haiku, but I want you to do something. I want you to apply those rules to a goal, to this target called puppies. So I'm saying, hey, I want you to take this set of rules, three lines, syllable counts, and syllables are hard. I want you to figure that out, and I want you to goal-seek based upon the outcome and the rules and constraints: put together a plan and then execute on that plan toward this outcome. That's pretty cool. Now, it's cool to see it do it for something as simple as a haiku, right? That's neat. But imagine if this said, I want you to onboard this new employee, or I want you to underwrite this mortgage. That's why we care, that's why we get excited about it. What's going on here is that we actually are at a point where we're able to use natural language to capture the information that we have, to capture the rules about how a business works, and then say, here's the goal we want you to accomplish, and it can work its way towards it. And there's a way that it does that, and it's pretty logical when you start to understand it. But that's what's going on here. That's why people get excited about it. So when you see people go, hey, look at what it did for my email, or look, it could write me a poem, that's the neat part. We're tracking? Okay, all right.
Speaker 1:So let's talk about some of the tools. We'll talk a little bit about task automation, because right now AI and automation are kind of being talked about as the same thing. So we'll talk a little bit about RPA. I didn't put the API stuff on there because that gets all super geeky, but that's really most of what's happening here. If you don't know what API means, it stands for application programming interface, and it's kind of like the back door into a lot of systems. It's really kind of the way the systems work. But we'll talk a little bit about RPA. We'll then talk about what's going on for knowledge workers: what is embedding, vectorization, LLMs, context windows. You might have heard a lot of this stuff in the marketing that's out in the news. So let's talk about what they are, what they really mean, how they actually apply to you, and why you care. And then we'll talk about what learning looks like. And finally we'll jump into this new thing called AI agents, which kind of gets into some of the managerial side.
Speaker 1:Let's start. All right, task automation. Anyone seen RPA before? All right, robotic process automation. I literally wrote this code. You can't see it, which is good, because it's not beautiful code, but I'm going to hit go, and what you're going to watch is this is going to log into a website and it's going to upload like 50 orders.
Speaker 1:Okay, notice, this is going fast. That's because the computer's doing it, completely without a person touching it. So that's called robotic process automation. It's a technology that is kind of the foundation to a lot of computer testing. So software companies will come in and they're like, hey, we don't want to pay people to manually go click through a bunch of buttons, so we wrote code to teach a computer how to go do that for us. And then all of a sudden we realize, hey, you know what, that's actually kind of useful, and there are some applications that we can't get into through databases or APIs. Maybe we can go through the front end the same way a human does, and so that's what robotic process automation is. You've maybe heard of UiPath, Blue Prism, or Automation Anywhere. Those are some of the technology, not new, they're old now, it's like a whole three years ago. So that's some of the technology that's been around that's doing a lot of that. But this is getting used a lot with AI, and you'll see here at the end on the agent side where they're starting to apply some of that.
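For readers who want to see what that kind of front-end automation looks like in code, here is a minimal sketch using Selenium in Python. It is not the code from the demo; the URL, element IDs, and file names are hypothetical placeholders.

```python
# A minimal RPA-style sketch with Selenium (not the speaker's actual demo code).
# The URL, element IDs, and file names below are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://orders.example.com/login")            # hypothetical order portal

# Log in the same way a person would: find the fields, type, click.
driver.find_element(By.ID, "username").send_keys("bot-user")
driver.find_element(By.ID, "password").send_keys("secret")
driver.find_element(By.ID, "login-button").click()

# Upload each order through the front end, no API required.
for order_file in ["/data/order_001.csv", "/data/order_002.csv"]:  # stand-ins for the ~50 orders
    driver.find_element(By.ID, "upload-input").send_keys(order_file)
    driver.find_element(By.ID, "submit-order").click()

driver.quit()
```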
Speaker 1:But I want you to know that it's out there, and a lot of people talk about automation, and this is one of the examples. I did mention APIs, application programming interfaces. Frankly, that's how most automation is done, but it's not as fun to show off. You know, showing code run isn't really that cool. So, all right, let's move into the fun stuff.
Speaker 1:So AI is not magic, it's math. So we're going to talk about it now. Bear with me. At first this feels heavy and you're like, what in the world are you doing, talking crazy math stuff with me? But it makes sense. It's not that complicated.
Speaker 1:What we want to do, and we talked about it earlier, is convert these sentences into ways that we can understand them. All right, this is what's called embeddings. So what AI did, and kind of the big breakthrough, is it said, hey, let's figure out a way to understand how a sentence is put together so that we can compare it to other sentences. So what we're going to do is we're going to take every word in the sentence, in this case let's take the word Italian, and we're going to look at all the words that are on the left, and then we're going to weight them, basically put mathematical numbers around the relationship between all those words, so that we have some indication of how those words relate to each other. Then we're gonna take all the words on the right, and we're gonna weight those and put numbers around them. Okay, finally, we end up with what ends up being a really long list of numbers. So, yes, I went all geeky.
Speaker 1:This is what's called a vector. Remember physics class? Right? Vectors, you had to calculate all that stuff. Well, that's basically what we did here. And while I'm showing you kind of a simple version, this is done across billions and billions of parameters and weightings in the way that it all works. So you get a really, really, really complicated list of numbers. But for the purpose of this, let's just worry about two and put them in a two-dimensional chart here, right?
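As a rough illustration of "weight the neighbors and end up with a list of numbers," here is a toy Python sketch. Real models learn these weights with neural networks across billions of parameters; the decay-with-distance rule below is invented purely to make the idea concrete.

```python
# Toy illustration only: give each neighboring word a weight that decays with
# distance from the target word, so a word in context becomes a list of numbers.
from collections import defaultdict

def toy_embed(sentence: str, target: str) -> dict:
    words = sentence.lower().split()
    i = words.index(target.lower())
    weights = defaultdict(float)
    for j, word in enumerate(words):
        if j != i:
            weights[word] += 1.0 / abs(j - i)   # closer neighbors weigh more
    return dict(weights)

print(toy_embed("lasagna is Italian food", "Italian"))
# e.g. {'lasagna': 0.5, 'is': 1.0, 'food': 1.0}
```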
Speaker 1:So let's take our sentence, lasagna is Italian food. And now what we're going to do is we're going to load that into our large language model. So we're going to first embed it, okay, so we run it through our embedding algorithm, that gives us all of our numbers, that creates a vector. We now have this vector that's out there. We drop that vector onto our platform.
Speaker 1:Then I come along and I say, hey, ravioli is Italian food. Let's do the same thing. Now, it's not quite the same vector, right? It's just a little bit different. We're still talking about Italian food, but we're talking about ravioli now instead of lasagna. So we have very similar types of things happening here, but it's not quite the exact same. And then we're going to come in and say, Super Mario is Italian. So it's kind of the same, but it's not, right? Its vector is going to be a little bit different, because we're not talking about food all of a sudden, we're talking about a video game, and so everything's just gonna be a little bit different, all right. So why does that matter?
Speaker 1:Now let's go load the entire internet, like everything we know about all of mankind, everything that's on the internet. Let's load it all up and let's turn it all into these vectors. Now we're gonna get this massive database of all these vectors that are out there. We come along and we say, hey, I need Italian recipes. So what's it gonna do? We do the exact same thing. We literally turn it into a vector, and then we say, give me all the vectors that are kind of close. For the math people, that's cosine similarity, that's what they're doing there. But they say, go give me all the vectors that are kind of close to what it is that I've asked you for, and I want you to bring those back. And now I have a set of knowledge I can work with. So see how we got there.
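Here is a quick sketch of what "give me all the vectors that are kind of close" means in practice. The 2-D vectors below are made up for illustration; real embeddings have hundreds or thousands of dimensions.

```python
# Rank documents by cosine similarity to a query vector.
# The 2-D vectors are invented: first number ~ "food-ness", second ~ "Italian-ness".
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

documents = {
    "Lasagna is Italian food.": (0.90, 0.85),
    "Ravioli is Italian food.": (0.88, 0.90),
    "Super Mario is Italian.":  (0.10, 0.95),   # Italian, but not food
}

query = (0.85, 0.80)   # pretend embedding of "I need Italian recipes"

for text, vec in sorted(documents.items(),
                        key=lambda kv: cosine_similarity(query, kv[1]),
                        reverse=True):
    print(f"{cosine_similarity(query, vec):.3f}  {text}")
# The two food sentences score close to the query; Super Mario ranks lower.
```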
Speaker 1:So the reason I want you to understand how the vectors work is because when you understand that, and you understand how it starts to pull its information together, you start to learn how to talk to it. If I say, don't think of elephants, did you all think of elephants? So if I'm in a prompt and I say don't do something, what did I just tell it to do? I just told it to pull that in. So that's part of the way that we learn how to talk to it. And that's why they say, when you're writing a prompt, tell it what you want it to do, not what you don't want it to do. So by understanding how this is being put together, we can start to be intelligent in the way we look at how we're going to apply all this. We tracking? We good? Okay, all right.
Speaker 1:So now let's talk a little bit more about large language models. So we're gonna come in here and we're gonna say, hey, I want Italian recipes. And it's gonna go out to our large language model. It's gonna pull in that lasagna is Italian food. It then pulls in that ravioli is Italian food. But it didn't pull in Super Mario, right? Because that vector didn't fit, it was outside of it. So Super Mario being Italian doesn't matter. And then it says, hey, here are a few Italian recipes that you can try at home. Sweet. As was mentioned earlier, this is a mathematical probability list. It's basically trying to figure out what you like and what you want, and then it's using a random number plus the next most probable word and trying to pick that out for you. I really dumbed that down, but that's kind of what's going on there. So, anyways, again, it's not magic, it's just using probability theory, all right. So we now have, here are a few Italian recipes you can do at home.
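The "random number plus the next most probable word" idea looks roughly like this in code. The words and probabilities here are invented for illustration; a real model scores tens of thousands of possible tokens at every step.

```python
# Toy next-word sampling: a probability list plus a random number, nothing magical.
import random

next_word_probs = {
    "lasagna": 0.40,
    "ravioli": 0.30,
    "risotto": 0.20,
    "pizza":   0.10,
}

def sample_next_word(probs):
    r = random.random()            # random number between 0 and 1
    cumulative = 0.0
    for word, p in probs.items():
        cumulative += p
        if r <= cumulative:
            return word
    return word                    # fallback for floating-point rounding

print("Here are a few Italian recipes, starting with", sample_next_word(next_word_probs))
```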
Speaker 1:Anybody ever been talking to your AI and it suddenly, like, forgets? You're like, what just happened? I'm a Notre Dame football fan, so, like, we've been talking about Notre Dame football, and all of a sudden you're bringing up stuff about Purdue. How dare you? Like, what is going on here? So here's what happens. As soon as I come in and I give it a different prompt: I was thinking more of traditional Italian, like maybe Tuscan food.
Speaker 1:All of a sudden, this thing called the context comes into play. Everything that's outside of this box here gets forgotten, and it's basically the stuff that's oldest, the stuff that came in at the top. So why is that? Essentially, what's going on is we can only pull in so many vectors at a time in order to make a decision, and we run out of space. Think of it like having a desk that's got a certain amount of usable space on it, and eventually you just start running out of places to put things. So what do you do? You start taking the stuff you don't use and you put it on the floor. That's what it's doing. So when you're having a conversation with AI and all of a sudden it goes, I don't know what you're talking about, this is what's happened. Essentially, we've blown out the context.
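A sketch of the desk-running-out-of-space idea: keep the newest messages that fit under a fixed budget and drop everything older. The tiny limit and the use of word counts as a stand-in for tokens are both simplifications.

```python
# Why older messages "fall off the desk": keep only what fits in a fixed budget,
# dropping the oldest turns first. Word counts stand in for tokens here.
CONTEXT_LIMIT = 50   # hypothetical and tiny; real models allow far more

def fit_to_context(messages, limit=CONTEXT_LIMIT):
    kept, used = [], 0
    for message in reversed(messages):          # walk from newest to oldest
        cost = len(message.split())
        if used + cost > limit:
            break                               # everything older is forgotten
        kept.append(message)
        used += cost
    return list(reversed(kept))                 # restore chronological order

chat = ["I need Italian recipes.", "Here are a few Italian recipes...",
        "I was thinking more of traditional Italian, maybe Tuscan food."]
print(fit_to_context(chat))
```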
Speaker 1:Now I want to be careful here, because this makes it look like the context is really small. It's really hard to comprehend the amount of information that we, as humans, hold in our head at any given moment, and the AI context is basically a replication of our short-term memory, the things that we just know. I mean, for instance, I'm talking really fast because that's just who I am, and there's lots of words coming out of my mouth. I know all those words and they're all just coming to me really quickly. That's all in my memory somewhere, and we're pulling it up really, really fast. For AI to replicate that, it basically has to have all that information in its memory. So it doesn't matter that this context right now can literally hold the entire Harry Potter series, all seven books, with no trouble. That's still not enough. It's still too little when you think about how we as humans make decisions. It's still going to run out of space.
Speaker 1:So this is an important thing to understand. A practical tip that I like is to start new chats almost every time: I summarize what I've been saying and I copy that into a new chat, because it gives me the information that I want in a context that I know I have control around. So if you ever find yourself running out of room, or it keeps forgetting things and you're like, I told you that already, why are you forgetting? It's because it's moving out of the context. So summarize it, create a new chat, and it'll pick up right where it left off. Does that make sense? Okay, so that's what's going on inside the large language model.
Speaker 1:So this is essentially how we're getting our knowledge. We have the automation tools, and now we have the knowledge. How do we get our knowledge? Well, if we can put things into vectors, and we can then have context, like we're talking to a human that has memory, we can now pull information back and forth and we can have a conversation with our data. Here's a cool thing, though: we can do this with what's called unstructured data, and that's also kind of a revolutionary thought. Like, all your emails are suddenly accessible to a computer system to make some decisions around, because we can understand the context of the language, the way the information is being communicated. So this isn't just all the stuff in your database, though that's important, and trust me, I want all that stuff because that will make better decisions. It's also all the stuff in your PDFs, all the stuff in your Word documents, all the stuff in your emails. Even though it's unstructured data, we now have access to it, and we can start to build models around how to communicate and interact on that front. That's pretty cool from a business standpoint, right? All of a sudden, stuff that was locked away in Excel files is now available to us. Make sense? All right.
Speaker 1:So what if we take all of our organizational data and we load it into a local database? We take all of our Word files, all of our documents, all of our emails, and we load them up into a database that we control, that's on our premises, that is in our world. We load it all into there, we call it a vector database, and then we adjust the way that the chatbots work so that when I give it a prompt, the first thing it does is go out to our database and pull information in, and then it goes out to the shared LLM to pull in general data before it generates the response. This is what's referred to as retrieval augmented generation, and essentially what it's doing is it's saying, hey, let's let your data be your data. Let's put it in its own space, and when we're having a conversation around the prompt that you give, let's reference that first. Let's make sure that we're getting what you want from your data first, but let's not try and load the entire internet into our local database. Let's leverage that too. So we're trying to get the best of both worlds here. Make sense?
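At a very high level, the retrieval augmented generation flow described above could be sketched like this. Every name here (embed, vector_db.search, llm.generate) is a hypothetical stand-in, not any particular vendor's API.

```python
# A minimal RAG sketch under stated assumptions: `embed`, `vector_db`, and `llm`
# are hypothetical objects passed in by the caller.

def answer_with_rag(prompt, vector_db, llm, embed, top_k=5):
    # 1. Turn the prompt into a vector, the same way the documents were embedded.
    query_vector = embed(prompt)

    # 2. "Let your data be your data": pull the closest chunks from your own database first.
    local_chunks = vector_db.search(query_vector, top_k=top_k)

    # 3. Hand both your data and the question to the general-purpose model.
    augmented_prompt = (
        "Use the following company documents when answering.\n\n"
        + "\n\n".join(local_chunks)
        + f"\n\nQuestion: {prompt}"
    )
    return llm.generate(augmented_prompt)
```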
Speaker 1:So what this lets us do now is I can load up special information into a central environment, into my environment that I control, and I can then access it inside of a prompt engine. Now, you're probably not going to... well, actually, you know what, anyone use the projects in ChatGPT or Claude? You've seen those. That's basically what they're doing here. So when you load up a project, on the right side, I'm thinking in Claude right now, there's a place where you can load up documents, right? You can load up information. Well, when you're loading up that information, what's it doing? It's embedding it, vectorizing it, and loading it into its own little environment, its own little database. Then when you make a prompt, it goes, well, let's look there first. So if you haven't done this, we do it with our RoboSource brand.
Speaker 1:So, like, how do you talk? Like, what does the voice of our company sound like? Well, my wife is the voice of our company, and so she has all the documents and all the ways that we communicate. She's a much better writer than I am. It sounds really good when she says it; it sounds kind of corny when I say it. So we put all of her examples in place, and then I can go in and say, hey, I'm trying to write a blog about this, but can you make it sound like the way that RoboSource would talk? What does it do? It goes to our database, it goes to our local project, pulls in examples of the way that she communicates. It then takes my prompt and it goes, all right, let's go leverage all the information we have on the internet as a whole, and let's now model that pattern in a way that makes sense.
Speaker 1:And so now I have a way to make it sound like the way that our company should sound, which is awesome. And what do I do with that? I can take that and load it back into my documents. We just created a learning loop. Does that make sense? So we went from, I just created something, it's good, I drop it back into my organization-specific data, it gets loaded into my database, and it's now organizational knowledge. We're learning, we're getting smarter, the business is starting to pick up information. That's essentially what's going on with learning on the AI side, at a simple level, for a business. Kind of cool, huh? That's a new opportunity for us where, as we're learning things, we can actually apply them and continue to feed them back in, so we can keep getting better and better as a business in the way that we operate on a day-to-day basis.
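The learning loop he describes is just one extra step on top of the retrieval sketch above: embed the approved output and store it so the next prompt can retrieve it too. The `vector_db.add` method is another hypothetical placeholder.

```python
# Sketch of the learning loop, reusing the hypothetical `embed`, `vector_db`,
# and `answer_with_rag` pieces from the RAG sketch above.

def write_and_learn(prompt, vector_db, llm, embed):
    draft = answer_with_rag(prompt, vector_db, llm, embed)   # sounds like the company

    # After a human reviews the draft, it becomes organizational knowledge:
    # embed it and store it so future prompts can retrieve it as well.
    vector_db.add(embed(draft), draft)
    return draft
```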
Speaker 1:Neat. I'm used to people talking to me while I'm doing this, so this is a little bit hard for me. All right, let's see how much time I got here. We are doing well.
Speaker 1:Agents. Anyone heard of them? You seen them online? Most people have not heard of them, it looks like. All right. So what's going on with agents?
Speaker 1:Agents are essentially a way for a computer to do your work for you, in some ways. So I'm going to show an example here. This is using Claude's computer use model, so it's a form of an agent that's actually making some decisions. What you're going to watch it do is perceive the environment. By that I mean it's going to actually understand the computer desktop. It goes really fast, which is why I'm describing it now. You're going to watch it take pictures of the desktop environment, identify where, like, Firefox is, then you'll see it move the mouse down, click Firefox and open it. It's going to actually go through, make decisions, take actions, and accomplish a goal.
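The perceive, decide, act loop described here can be sketched roughly as follows. The functions `take_screenshot`, `perform`, and `model.decide_next_action` are hypothetical placeholders, not Claude's actual computer-use API.

```python
# Sketch of an agent's perceive / decide / act loop. The callables passed in
# (take_screenshot, perform) and the model interface are hypothetical stand-ins.

def run_agent(goal, model, take_screenshot, perform, max_steps=25):
    for _ in range(max_steps):
        screenshot = take_screenshot()                       # perceive the desktop
        action = model.decide_next_action(goal, screenshot)  # decide: click, type, done...
        if action.name == "done":
            return action.result                             # e.g. the finished CSV
        perform(action)                                      # act: move mouse, click, type
    raise TimeoutError("Agent did not reach the goal within the step budget")
```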
Speaker 1:So in this case, what I'm saying is, I ask it down here, you'll see me type here: go get the top 10 mops for sale on Amazon and put them into a CSV file, a comma-separated values file, for me, and here are the fields that I want you to get, and I tell it to go. So what does it do? Well, watch. It just took a screenshot and it identified where Firefox was. Oh yeah, it just opened Firefox. It's then going to figure out where... oh, we just went to Amazon. It's going to type in the search box, mops, and we're going to search for mops. Now, this was just a prompt, right? You just said, please do this for me.
Speaker 1:And it basically created a plan. It goal-sought, right? It said, oh, this is the goal you want, let me work my way backwards and figure out all the steps that need to happen. I will create each of those steps along the way and I will then go execute on those steps on your behalf. Notice, my screen just went to dark mode, so you can tell what time of day I was doing this. And here we're done. We actually have... here's a file. It gives me the output of the file, the top 10. You can't read it; I can read it up here.
Speaker 1:The second-to-last one says Swiffer PowerMop Multi-Surface Mop Kit, which I think is this one over here. So it literally looked at the screen, pulled out the data, found out what it needed to do, all on your behalf, off of one sentence, right? And this is where I usually get questions about Skynet, Terminator, things along those lines. So in this case, it's cool, it's not Skynet. But again, let's break down how it did it. Right, we already know it vectorizes it. We know how to create the vectors, we talked about that. And, by the way, doing vectors on images is basically the same thing. There's a little nuance to it, but it's basically the same thing. So we create a vector.
Speaker 1:We then are able to create a plan. Well, we know that that can happen, because when we ask it to create plans, it goes out and it finds all the things. We say create a plan, and it goes out to the world of the internet and says, how do you create a plan? It gets back all that information and it then creates those models for you. So it's doing the same thing here. It's really doing nothing different than what we talked about in the first three slides. It's just that we gave it permission to go ahead and do things on our behalf using tools. So why does this matter?
Speaker 1:Let's get into kind of the theory of work then. So management often basically comes in and says, hey, here's our strategic goals, here's what we're trying to set up, here's what we're trying to make happen. Then we're going to go hire knowledge workers and we're going to say we're going to give you a role, we're going to call you chief operating officer and we want to make sure that you hit all of these KPIs effectively. And what does that person do? Well, they set up a set of different goals of what it means in order to accomplish that job effectively. And then, in order to accomplish the job, they then break that down into tasks in order to get it done.
Speaker 1:Traditionally, what happens when we start talking about automating a company is we come in and we say, well, hey, let's look at these tasks, right? Let's look at the tasks that are at the lowest level and figure out how we can automate those for you, because at the end of the day, you're still responsible for figuring out what the goal should be in the first place. So you tell me what the goal is, and then I'll go and figure out how to execute each of those tasks. Well, where the world is starting to head is a world where management can say to an AI agent, hey, here's the KPI, and the agent can figure out what the goal is and what tasks need to happen, and start to execute on those tasks. Now, are we there yet? No, not even close. But you can see where the technology is, and you can see that we're going to get there pretty quick. So this is what has people going, whoa, this could be revolutionary, because you can see how we get from basic vectorization of data to, I'm doing your entire freaking job.
Speaker 1:Now, is it going to replace jobs? It'll replace some. Is it going to replace all of them? No. Humans are way too... ingenuity, is that the word? I don't know... smart. Humans are way too smart to not come up with new things on their own, right? We're going to invent new ideas, we're going to invent new thoughts, and really, at the end of the day, most of business is relationships anyways, and computers are not going to be great at that. AI does it okay, but it's not great at it. So business is still going to be done human to human, and that's still going to be the strategic advantage. But a lot of the operational day-to-day, you're going to start seeing AI agents be able to execute on.
Speaker 1:Has anyone heard the concept that's come out recently, called service as software? Yeah, so this is basically what they're getting at. They're saying, hey, if an AI agent can take what used to be a service-only offering, say maybe payroll, and automate it, then can we now go to businesses and offer them this service, but behind the scenes have the AI run the operations in its entirety? And then we'll charge you, basically, for the outcome that we produce; our business model will essentially be to bill you for a successful outcome. That's a different way of thinking about things, but it's getting a lot of traction, specifically in the investing world and on the coasts. Pay for the outcome that you get, pay for the service that is delivered for you. So pay for every payroll run, and then back it with AI agents that do a lot of that work for you.
Speaker 1:Now I'm going to put a whole bunch of caveats around things. I don't remember what's next. Ah, good, hurdles to adoption. So it's slow. It's getting faster, but it's slow.
Speaker 1:So I kind of lied a little bit. It doesn't actually break things down by words. It breaks things down by what are called tokens, and tokens are parts of words. No one really knows how to explain what a token is, so I just turned it into words to make it easier to understand. But when you're reading the documentation, they talk about tokens per second. And so, total side note, anyone see the thing about how AI couldn't tell you how many R's were in strawberry? Yeah, so that's because it isn't actually reading words and it isn't actually counting characters. It breaks it into tokens, and because of the way that it breaks it down into tokens, it saw two tokens that had an R in it, so it said there were two, even though berry has two R's in it by itself. So that's kind of what's going on there.
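A toy illustration of the strawberry problem: pretend the word splits into the two chunks "straw" and "berry" (real tokenizers choose their own boundaries). If you reason per chunk instead of per character, the letter count comes out wrong.

```python
# Pretend tokenization of "strawberry" into two chunks (illustrative only;
# real tokenizers pick their own sub-word boundaries).
chunks = ["straw", "berry"]

naive_answer = sum(1 for chunk in chunks if "r" in chunk)    # "two chunks contain an r" -> 2
correct_answer = "".join(chunks).count("r")                  # actual character count -> 3

print(naive_answer, correct_answer)   # 2 3
```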
Speaker 1:But, as a result, this is still too slow, and therefore it feels kind of awkward, especially when you're doing voice stuff. Anyone talked with a voice AI yet? You're like, wow, it can understand me, wow, it gives me the information I want, but it's a little slow. There are these awkward pauses in between every sentence, right, while it's processing what's going on. That's just because it's not fast enough yet. So that's got to get fixed. Tooling needs to be improved. Context window size still needs to be improved; we talked about that a little bit. We just can't remember enough in order to make really informed, long-term, good decisions. We have to keep breaking them down into smaller pieces, which, in prompt engineering, if you're looking it up, is called chain-of-thought reasoning. But that's essentially what's going on there.
Speaker 1:The tooling isn't quite ready to just have conversations with you yet. And then usability. This is where, I don't know about you, but I'm kind of already burned out on chatbots. Like, the next person that comes out with a brand-new, revolutionary AI thing and puts a chatbot in front of me, I'm going to scream. It's like, yes, I understand the value behind it, but at the same time, we've got to come up with better ways to interact with the system. The models haven't been invented yet; they're still being created. We don't know how to interact with it. Like, the only intelligent thing we interact with is people, and we interact with them through words, so that's the only way we know how to interact with the AI. But we're also on a computer, and so there are new models that need to be created around it. We just haven't figured them out yet. So I think that's coming.
Speaker 1:Anyone read Mollick's book Co-Intelligence? Fascinating book. I think he's out of Wharton. He has four principles that I like and I like to talk about. The first is that you should treat AI like a human when you talk to it. And that's not because they're going to become our AI overlords. Why do we treat AI like a human? Because AI was trained on data that was created by humans. So if the patterns that AI is replicating are speech patterns of humans, then by talking to it the same way we would talk to a human, we're going to get the better pattern out of it. It's going to recognize those patterns more effectively. So talk to it like it's a human.
Speaker 1:The second rule, and I'm doing these off the top of my head, so I'm sorry if I mess them up a little bit, is to bring AI to the table. Because if you've played with it, you know it really solves the blank sheet syndrome problem, where you're like, I don't know how to create a marketing strategy for a new product on XYZ. Well, ask it. It'll give you 50 ideas. Half of them might be terrible, but you no longer have a blank sheet. You now have something to go with. So it actually does accelerate the brainstorming and problem-solving side. So, yeah, bring AI to the table.
Speaker 1:The third one is that it's the dumbest it's ever going to be. AI is just getting smarter and smarter and smarter and is going to be continually advancing, so be prepared for that. And then the fourth, and probably my favorite one, is human in the loop. By that they mean don't trust it. My favorite analogy is, anyone have a toddler? Okay, sometimes you're blown away by the toddler's ability to manipulate you to get that cookie, and you just look at them and you're like, that's it, you're going to Harvard, it's happening, you're a genius, it's over. And then, five seconds later, you're screaming, don't put your hand on the stove, right? That's kind of AI right now. Like, it's brilliant and it will blow your mind, and then it'll do the dumbest stuff and you're like, what just happened? So understand that when you're working with it, and that's why we put a human in the loop. Ask it to do something for you, then have a human review it and make sure that it's all good to go.
Speaker 1:So this is me. I'm with RoboSource. We build automation tools for intelligent companies. I had fun. Thanks for letting me share my geekiness with you, and, Leo, I'll be around all day. Let me know if I can help with anything.