AI or Not

E032 - AI or Not - Steve Wilson and Pamela Isom

Pamela Isom Season 1 Episode 32

Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.

The intersection of artificial intelligence and cybersecurity represents one of today's most critical technological frontiers. In this compelling conversation, Pamela Isom speaks with Steve Wilson, Chief Product Officer at Exabeam and founder of the OWASP Large Language Model security research group, about the urgent security challenges facing organizations adopting AI technologies.

Wilson brings decades of experience to the discussion, having founded an AI company in 1992 before working on the Java programming language and eventually returning to AI following the ChatGPT revolution. His work establishing the OWASP LLM security project has helped countless organizations understand and mitigate the unique risks these powerful new tools present.

The conversation offers practical guidance on combating prompt injection attacks—identified as the number one security threat to LLMs—through implementing a zero-trust architecture specifically designed for AI applications. Wilson explains that LLMs must be treated as "something between a confused deputy and an enemy sleeper agent," requiring rigorous security controls and thoughtful implementation.

Particularly illuminating is Wilson's explanation of hallucinations, comparing them to LLMs taking "closed book tests" after being asked to memorize the entire internet. His recommended solution, Retrieval Augmented Generation (RAG), transforms these into "open book tests" by providing models with relevant, trusted information before they generate responses—dramatically improving output accuracy.

Despite the legitimate concerns, Wilson remains optimistic about LLMs' transformative potential when deployed thoughtfully. At Exabeam, their AI copilot helps cybersecurity analysts work 2-3 times faster by translating complex technical information into clear, actionable English—demonstrating how organizations can leverage AI's strengths while mitigating risks.

Don't miss this essential conversation for anyone building or implementing AI systems in their organization. The technological transformation happening with generative AI may be the most significant since the World Wide Web—and as Wilson warns, waiting on the sidelines isn't an option for companies that want to remain competitive.

[00:00] Pamela Isom: This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and are not legal, health, tax, or professional advice, nor official statements by their organizations.

[00:41] Guest views may not be those of the host.

[00:46] Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and your digital transformation journey.

[01:03] I am Pamela Isom and I am your podcast host.

[01:07] So we have a special guest with us today, Steve Wilson.

[01:11] Steve is a chief product officer. He's the author of a book that I had the pleasure of reviewing and giving feedback on. He's an advisor and he's a project leader, and there's so much more to him that I'll let him discuss today.

[01:28] But Steve, we have some common interests, particularly when it comes to AI and cybersecurity and securing large language models. In fact, securing that entire AI ecosystem is something that I know we both are pursuing.

[01:44] So that said, I thought it would be good for you to come talk to me and let's have a conversation that we can share with the listeners. So thank you for being a guest on this podcast and welcome to AI Or Not.

[01:59] Steve Wilson: Awesome. Thanks for having me, Pamela. And thanks for being a reviewer on the book and it really helped improve that. And I'm excited to talk to you today.

[02:08] Pamela Isom: Absolutely. Will you tell me more about yourself,

[02:12] your career journey,

[02:14] how you got to where you are today,

[02:17] and also I'm a little bit curious about what inspired you to become an author.

[02:24] And then if you can remember, I'll remind you. But what is your connection with OWASP?

[02:29] Steve Wilson: Sure.

[02:31] Well, let me start with where I am today and then I'll back up and we'll talk about the journey of how I got here. So I do wear three different hats that we can talk about.

[02:42] And I always joke with friends that my job involves AI and cybersecurity and so do all my hobbies.

[02:52] Before LLM security, I used to play guitar and I used to do martial arts and it seems all I do is large language models and cybersecurity now. So my day job is I am the Chief Product Officer at Exabeam, which is a Gartner Magic Quadrant leader in AI driven security operations.

[03:12] And we use AI technologies to help companies secure their networks and protect their employees and their data, which we can talk about some. The other thing is, last spring I got involved in founding a group basically doing open source AI security research.

[03:31] And that's part of the OWASP foundation. And we'll come back and I'll talk about how I got involved with that. And I'm still the project lead on that group, which is doing new cool stuff.

[03:40] And the last piece is that last month we launched my new book from O'Reilly, which is The Developer's Playbook for Large Language Model Security: Building Secure AI Applications. And so I've been having fun going out and talking to people about that and getting their feedback on how

[04:03] we all think about how to best build secure, robust, responsible applications using some of these new AI technologies.

[04:11] Pamela Isom: Yeah, definitely, keep going. Tell me about OWASP.

[04:15] Steve Wilson: Yeah, well,

[04:17] really, really quick. Let's talk about the journey, how I got here, and then we'll kind of connect that back up to OWASP. But going all the way back to the beginning of my career coming out of college, I wound up founding an AI company in 1992 with a few friends, and you know,

[04:37] as people can either imagine if they're a bit younger, or, if they're my age, they will remember, we didn't have as much compute power and all sorts of things back then.

[04:48] So we built some cool things with AI, we sold them, we made some money. But it was a hard row to hoe back then. And people talk about that time as part of an AI winter, where

[05:00] people thought there were great things you could do with AI, but it was really hard and the promise never really seemed to pay off. And around that time somebody invented this thing called the World Wide Web and everybody got distracted by that, including me.

[05:13] And I wound up going off and, you know, I was an early member of the team that developed the Java programming language at Sun Microsystems. And I went off and I did developer tools and cloud computing and all sorts of things like that for a couple decades before I came back to AI.

[05:30] And you know, sort of after the launch of ChatGPT, you know, it sort of changed things for all of us who are in this technology space. And I got quite interested in that.

[05:42] And you know, I was working at a cybersecurity company and I got interested in what the cybersecurity implications of this new technology were. And yeah, last spring, year and a half ago, there was precious little that was written about this.

[05:56] And so I put together this little group at the Open Worldwide Application Security Project, which is OWASP, and asked for some volunteers who wanted to work on it. And turned out there was a lot of interest in it.

[06:09] I thought maybe there'd be 10 people who wanted to participate in such a crazy endeavor. And instead I had hundreds of people volunteer in the first week, and we put out a first version of what we called the OWASP Top 10 for Large Language Models.

[06:22] And it was basically a list of the top things you need to worry about from a security perspective when you're building with these new technologies. And I think, you know, if you pay attention, a week doesn't go by that some company isn't in the headlines for something where they deployed some new AI technology and got themselves

[06:42] in trouble, either put in a position where someone was able to abuse it, steal from it, or twist it to their will in some way that people didn't intend.

[06:52] And so a lot of that guidance really sort of came to be there, and then that wound up translating into what the book is really about.

[07:01] Pamela Isom: Yeah, so that's good. I mean, that's interesting and an exciting background for you, and I am glad that you started the initiative. I enjoyed the book. I'm curious to hear about your playbook.

[07:16] So at some point I guess we can talk about that, because I'm curious to know what you see as the essential portions of a developer's playbook if they're focused on large language models.

[07:33] But wait, don't answer that yet. First of all, I want to know, where do you see us headed from an OWASP and LLM security perspective? What's the future? And this may tie into your playbook.

[07:45] I don't know. But where do you see us headed?

[07:47] Steve Wilson: Yeah, so I would say one of the things about the OWASP top 10 list, and it's, you know, people have been developing OWASP top 10 lists for different technologies for 20 years.

[08:00] There's one for web applications, one for mobile, ours was the first one for AI technologies. But they're fundamentally mostly lists of problems.

[08:11] So here's a big list of 10 things to worry about and there's some information about how you might mitigate them. But the focus is on here's 10 bad things that could happen.

[08:21] And so the question that always comes up is,

[08:25] should I avoid these things, like, is it safe to do anything with this? And people are scared by these technologies. And there's a lot of chief information security officers who still have policies that say, don't use this stuff for doing work.

[08:41] I'm too scared of it. Something bad is going to happen.

[08:44] Pamela Isom: So they have policies, you say, that, like, ban AI or large language models?

[08:48] Steve Wilson: Or I'd say a year ago that was super common, that most CISOs were defaulting to a stance that says I don't want any of this. And they're gradually figuring out how do they open that up to where people can use them for work?

[09:01] Because otherwise you're just creating more shadow IT, where people are going to do it, but they're going to do it with no controls. So I think part of the guidance coming out from the OWASP group now is how do you turn these things into solutions

[09:16] that are more practical? And so several months ago we came out with something we called the CISO checklist, which was not a list of problems, but a list of things you should do to be ready to use these new AI technologies.

[09:31] And as a CISO, what are the things you should put in place to do this safely and responsibly, knowing that those risks are out there? And we're just about to announce some additional new documents; we have one coming out that is what we call a solutions guide.

[09:47] Turns out there's a lot of open source projects and even commercial projects now where people are building products to help secure these technologies, building guardrails, frameworks and AI firewalls. And so we're basically putting together a place where we're collecting all that information in one place, where people can find out all the tools that are available to help them secure that.

[10:11] One of the other things that we've started to get into as well is, when you talk about security,

[10:19] cybersecurity and AI, there are like three different angles that this comes from. And what we've talked about here so far mostly is how do I secure my AI technology? The other one that comes up a lot in the headlines these days is how are the bad guys, the hackers,

[10:35] the nation state actors, how are they using AI to attack people? And you know, one of the things that has been in the news a little bit, and it's even affected my own company, is this concept of things like deepfakes. Initially,

[10:54] when people talked about deepfakes, it was just the ability to generate something that looked like a photograph that was deceptive. Then it was videos, and now it's like live interactive Zoom calls where people can be wearing AI-based disguises that change their appearance and their voice.

[11:14] And you know, there was a famous example of that that was in the press a lot, where somebody at a financial institution was on a Zoom call with their chief financial officer, who told them to transfer a bunch of money, and they did, and they realized later it wasn't really their CFO.

[11:29] And we even had this happen at our own company, where I talked to our CISO and he said he had somebody applying for a job on his team who looked really, really well qualified.

[11:40] And when he interviewed him, there was something that was not right. And when they really looked into it, that person was wearing an AI-based disguise on the teleconference; they were interviewing for a remote position.

[11:53] And so I think this is out there, really happening now. And so from the OWASP perspective, we're putting out a set of guidance around how do you deal in a world where there are deep fakes?

[12:02] How do you think about that from a security perspective?

[12:05] And then, you know, lastly, on the OWASP front, we're getting ready to, before the end of the year come out with a new version of the top 10 list. And it's been a year since we put out the first one.

[12:15] We've got a lot of updated guidance and some new issues that have cropped up from a technical perspective that are important, that we want developers and security teams to understand, so some of that update is coming pretty shortly.

[12:28] Pamela Isom: When I think about what you just said, if we think about the emerging threats,

[12:34] you just described some, and I heard you mention the evolution of deepfakes. One other question I have is about prompt engineering. So prompt injection, I know that that's currently a problem.

[12:48] Where do you see that headed?

[12:50] Steve Wilson: It's a great question. I think one of the things that we've seen, and it's number one on the top 10 list, is prompt injection is the biggest security risk today to using these technologies. They just don't have a lot of common sense.

[13:06] They're full of a lot of facts and they can do things super fast. And so they're superhuman in many respects, but then in some other respects they're dumber than a two-year-old.

[13:17] And you just need to be really careful about what you trust them with. So there are more and more technologies that are coming out to help with prompt injection.

[13:27] And I would say by and large the models, the new models that are coming out from, whether it's OpenAI or Meta or Google, those base models over time get better at dealing with prompt injection.

[13:39] You need trickier versions of the attack, but the hackers are easily keeping up with that right now, and prompt injection is a big problem. I tell people, basically, you need to think about your LLM.

[13:53] If it's inside an application you're building, you know, as a business app, you need to treat it as something in between a confused deputy and an enemy sleeper agent at all times.

[14:06] And that means you assume everything that comes into it is untrusted bad data. And by proxy, you assume everything that comes out of it is likely to be,

[14:17] in effect, untrusted, tarnished data. And it leads you to the idea that you really need a new kind of zero trust framework for building these AI applications. And how do you apply those good concepts around zero trust that were originally around networking?

[14:36] How do you apply them very specifically to building with AI?
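
A minimal sketch of that zero-trust framing, added here as an illustration rather than anything from Exabeam or the OWASP project: the LLM sits between an input sanitizer and an output validator, and the application only acts on answers that pass an allow-list check. The call_llm function and the action names are placeholders, not real APIs.

```python
# Editorial sketch: zero-trust handling around an LLM call.
# call_llm() is a placeholder for whatever model API an application actually uses.
import re

ALLOWED_ACTIONS = {"summarize_ticket", "draft_reply", "escalate_to_human"}

def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real model API")

def sanitize_input(user_text: str) -> str:
    # Treat everything coming in as untrusted: strip control characters, clamp length.
    cleaned = re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]", "", user_text)
    return cleaned[:4000]

def validate_output(model_text: str) -> str:
    # Treat everything coming out as untrusted: only act on an allow-listed verb.
    action = model_text.strip().split()[0].lower() if model_text.strip() else ""
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"refusing unapproved action from model: {action!r}")
    return action

def handle_request(user_text: str) -> str:
    prompt = (
        "Reply with exactly one of these actions: "
        + ", ".join(sorted(ALLOWED_ACTIONS))
        + "\nRequest: "
        + sanitize_input(user_text)
    )
    return validate_output(call_llm(prompt))
```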

[14:40] Pamela Isom: So what does a zero trust architecture look like then in the AI era?

[14:45] Steve Wilson: So one of the things that we do in the early chapters of the book is set up the architecture for a typical large language model application. You know, you've got data coming in from somewhere which is being fed into the LLM and that could be from, you know, if it's a chatbot,

[15:04] it could be from a user typing away, interacting with the chatbot. But more and more there are more exotic ways that that comes in. It can be multimodal, where the data coming in could be text or video or audio.

[15:18] There's also another particular version of prompt injection, which is really tricky, which is what they call indirect prompt injection, which is, I'm not building a chatbot where somebody's talking to it or typing to it.

[15:30] It's processing data.

[15:32] You know, a good example is Microsoft Copilot has features where it will read your email for you and, based on that email, help you take certain kinds of actions. Which is an awesome use case and sounds super cool, but they recently found it was vulnerable to people putting prompt injections into the emails.

[15:51] And if somebody sends you an email that's not really intended for you, but intended for your copilot to read, and it embeds some secret instructions, it could get the copilot to accidentally be a confused deputy and exfiltrate data from your email or your Office 365 account and send it somewhere else.

[16:12] And that's where things like these prompt injections really have just continued to be a really big problem that everybody is trying to figure out new solutions for.
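
As one hedged illustration of a mitigation for this indirect prompt injection pattern, the sketch below screens retrieved documents, such as email bodies, before they ever reach the model. The pattern list and quarantine message are assumptions made for the example, not a complete or production-grade defense.

```python
# Editorial sketch: pre-filter retrieved content (e.g., email bodies) for likely
# injected instructions before handing it to an LLM. The patterns are illustrative
# only; real deployments layer this with stronger guardrails.
import re

SUSPICIOUS_PATTERNS = [
    r"ignore (all |any |the )?(previous|prior) instructions",
    r"you are now",
    r"do not tell the user",
    r"forward (this|the following|all) .* to",
]

def looks_like_injection(document_text: str) -> bool:
    text = document_text.lower()
    return any(re.search(pattern, text) for pattern in SUSPICIOUS_PATTERNS)

def prepare_for_model(documents: list[str]) -> list[str]:
    # Quarantine suspicious documents instead of silently feeding them to the model.
    prepared = []
    for doc in documents:
        if looks_like_injection(doc):
            prepared.append("[document withheld: possible prompt injection]")
        else:
            prepared.append(doc)
    return prepared
```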

[16:22] Pamela Isom: Yeah, we have to be mindful of the fact that we're using this for our convenience. So like the email example, I'm using it for my convenience. So I'm allowing Copilot, for instance, to write emails for me.

[16:37] I don't, because I'm a cyber person. I'm like, yeah, no, you're not writing my emails for me. But anyway, back to the point. So we have it where we leverage it to write emails for us.

[16:50] And now we have to be mindful of how might this become vulnerable. So it's almost like we can't just lean on the CISO and the cybersecurity experts. We have to be mindful of some of these things ourselves.

[17:11] And I think that is one of the emerging mechanisms that we can adopt in order to protect ourselves. I always talk to my clients and my listeners about things we can do to protect ourselves.

[17:33] And the problem is that we don't always want to think about how something may be manipulated. We want to think about the convenience and we want to think about the good.

[17:45] But if we are dealing with a deputy, you want to be sure that the deputy is handling things the way that you would, so you don't just turn people loose and let them run free.

[17:58] There's some type of management and monitoring that we have to do. And part of being effective governors and stewards is understanding what the potential vulnerabilities are, which is what I heard you say.

[18:15] Steve Wilson: So yeah, I think there's a pure cybersecurity aspect to this, where I think it's really important that cyber professionals understand this. But I do think there's a much broader business context, societal context on it.

[18:32] And it's just, it's really different layers of the same problem. And you know, we hear this in debates about how are we going to educate our kids now? And every kid from second grade on up has access to ChatGPT.

[18:44] Now is that a good thing or a bad thing? And how are we going to deal with that? And I think what we've started to see is good schools and forward thinking educators are finding ways to say, look, people are going to use these tools.

[18:57] It's no different than when there were pocket calculators. You know, there was a time when they said you can't use your pocket calculator on the math test because it's cheating.

[19:04] Now it's like, no, everybody has to have a calculator to take the math test, because that's not the point. But how do you get it so that this next generation of students, who will become the next generation of professionals and who are growing up AI native, how do they build that intuition where they know what AI is good at and what it is bad at?

[19:23] And how do you spot the fact that the AI is probably hallucinating when it says something and that you should cross check those facts,

[19:31] building those,

[19:33] just those habits and those skills, whether it's for cybersecurity, where it's very, very important, or just for your broader business use. In those contexts, that intuition becomes really important.

[19:45] Pamela Isom: Yeah, and I'm very optimistic. So I see those as opportunities to keep sharp, to keep our skills up to date.

[19:57] I mean, it's like you may think that it's okay to leave your doors unlocked in your home because you live in a neighborhood where there's hardly any issues.

[20:10] So you learn that there are things that we have to do to be more effective stewards,

[20:20] even though you don't necessarily want to go become a certified CISM, for instance. Right. But there are things that we have to do because this is a normal way of life.

[20:29] So basically we have to keep up with the technologies. Right. Keep up with the emergence and use of technologies. And if we do that, I think we're, we're protecting ourselves.

[20:41] So you talked about that. First of all, was there anything else you wanted to share with me about the zero trust architecture and how that looks in this era? Because that was a good discussion there.

[20:50] But is there anything else you want to add to that?

[20:53] Steve Wilson: Yeah, I'll say broadly when we look at this,

[20:57] there are these trust boundaries through the application that include sort of data coming into the application, let's call it, at inference time at runtime, and the data coming out of the LLM.

[21:08] The other piece that's really important in that architecture is your software supply chain, and in particular your AI software supply chain. And you know, all software development, or even just IT today, has these concepts of software supply chains.

[21:24] And there's been a lot of security disasters there in the last few years. They go by the names of things like SolarWinds, which was a company that was famously hacked, and they gave their hacked software to all of their customers, which included Fortune 500 companies and governments.

[21:40] And then there's open source versions of that. There was something called Log4j, which was a super popular Java library that was embedded in millions of applications and had a really bad security vulnerability in it that someone discovered, and then suddenly there were millions of vulnerable apps.

[21:55] Pamela Isom: Yeah, the thing with that is, I grew up on, I mean, I launched my career with Log4j. So that was not cool. Like, just not cool.

[22:05] Anyway, keep going.

[22:06] Steve Wilson: Oh, but I think that's what we're learning as a software industry, that these different parts that you're getting are just as important as the code that you're writing.

[22:17] And in the AI piece this is even more important. And so now you have new pieces that you need to think about because you probably have traditional software code, you know, Python code and libraries and you have an operating system just like you did before.

[22:32] But now I have a foundation large language model that I'm getting from somewhere, and maybe it's an API call to OpenAI or maybe it's an open source model that I got from Hugging Face.

[22:43] And then maybe I'm going to do some fine-tuning to really teach it a specialized skill for what I want to use it for. Maybe I'm in medical or legal and I want to give it some more information. But where do I get the training data for that, and what risks come with that?

[23:00] That data could be contaminated one way or another, either intentionally or unintentionally. If somebody does that with intention, it could put a backdoor into my application; otherwise, it might create unacceptable bias in the application.

[23:16] And we've seen that with the likes of Google when they put out their new image generators and they accidentally created a lot of bias in it and created things that got them accused, whether they meant to do it or not, of being racist and all these other things.

[23:32] So it's, you know, where are you getting your data? How are you handling it? That's another really important part of that zero trust architecture: you need to think really hard about all the ingredients that are going into your app,

[23:42] just the same way as the data that's going in and out at runtime.
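
One concrete supply-chain control this points toward, sketched here as an assumption rather than a prescribed practice: pin and verify the checksum of any model artifact pulled from an external source before loading it. The file name and digest below are placeholders.

```python
# Editorial sketch: verify a downloaded model artifact against a pinned SHA-256
# digest before loading it. The path and digest below are placeholders.
import hashlib
from pathlib import Path

PINNED_SHA256 = "0" * 64  # replace with the digest published by the model's maintainer

def verify_model_artifact(path: str, expected_sha256: str = PINNED_SHA256) -> None:
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"{path} failed integrity check (got {digest})")

# Example call (hypothetical file name):
# verify_model_artifact("models/finetuned-legal-7b.safetensors")
```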

[23:46] Pamela Isom: Yeah, we call that supply chain risk management. Yeah. And it isn't going away and that's good.

[23:52] I agree with you on that. So that intersection of AI and cybersecurity is so important and there's so many areas that we want to consider.

[24:03] Real quick, what's your take on hallucinations? Like, is there something we can do to,

[24:11] to deal with this? I know, I know. I have a quote from you that says the core reason for hallucinations lies in the LLM's operational mechanism, which is geared towards pattern matching and statistical extrapolation rather than factual verification.

[24:28] The factual verification is the one that I find hard. Right. How do you,

[24:37] how do we get better at the factual verification sooner in the process rather than later? So we know that one of the contributors to hallucinations is noise, noise in the training data.

[24:49] We know that. But the factual verification can help get rid of the noise. And we can also use tools, I think AI tools to help with the factual verification. But I'd like you to elaborate on that concept of hallucinations and what might we do to help with this,

[25:09] mitigate the risk associated with hallucinations and the factual verification.

[25:14] What do you think?

[25:16] Steve Wilson: This is one of my favorite topics and you know, I think, look, it's the thing people find kind of fascinating about this. Some people don't like the term hallucination because it makes you think it's like a person.

[25:28] Other people think like, well, it is kind of like a person.

[25:32] People do this. And so there's the very mathematical part of it, the very computer science part of it, where we can explain how noise in the training data turns into something that happens in a bunch of complex matrix multiplication in a GPU that winds up giving us the wrong answer.

[25:49] But I think there's a metaphor that I've been developing that seems to help people understand, which is, you know, I've got a whiteboard behind me here, and if you remember being in school, there was probably a point at which you had to get up in front of the class,

[26:06] in front of the chalkboard or the whiteboard and the teacher would tell you to solve a problem or ask you a question.

[26:13] It's a closed book test.

[26:16] All you've got up here is what's in your head. Well, you've got a couple choices. As a human here, you're trained to basically give the best answer you can, and hopefully you studied enough so that you know that answer and it's in your short term memory and you're ready to go.

[26:36] But let's face it, we've all been there: a lot of the time, it's not. So what do you do? You take your best guess, you go back to whatever principles you have and you try to work it forward.

[26:45] And one of the things that sometimes winds up with is some very interesting,

[26:50] sometimes even well-reasoned, completely wrong answers. And closed book tests are hard. And when we look at the way that we treat these LLMs like ChatGPT, we tell it, go read the Internet,

[27:03] all of it. And then I'm going to give you closed book tests.

[27:07] And what you get, we shouldn't be surprised, is there's a lot of times that those answers are not top of mind for it. So it takes a best guess and that's what you get.

[27:19] So how do you deal with this? The best way to deal with this is where you can. Do not make these tests closed book for your LLM. Let's talk about what LLMs are really great at.

[27:31] They're actually not that great at memorizing facts. It's amazing that they do memorize a certain number of facts through reading the entire Internet. But really what they're great at is they're great at language.

[27:43] That's why they're called large language models: they're good at language, whether it's human language or computer code language, they're good at dealing with language constructs. By extension, they're great at summarizing data, taking complex blocks of data,

[27:58] not the entire Internet, but say, a few chapters from a book, and summarizing that down to something very tight and readable and actionable. Those are like the superpowers of LLMs.

[28:11] And so my company, Exabeam, we build cybersecurity products that help people who are looking for bad guys on their networks. And they collect billions of lines from different log files, looking for those traces of where the bad guys are.

[28:30] And we're really good at helping narrow that down to actionable places to look. And that's what the company's been doing for 10 years. But earlier this year, we added what we call our Exabeam copilot, and it's an LLM that we added into the system.

[28:45] And what it does is take a lot of what was previously very technical information and translate that into very simple English that the analyst can read. And so rather than a complicated set of HTML screens with different indicators and risk scores about why they should worry about something particular going on in their network,

[29:07] comes up and says,

[29:09] hey, Pamela logged in from a computer she doesn't usually log in from, in a place she doesn't log in from, and started doing things she doesn't usually do. I think the following may have happened.

[29:21] You know, she may have had her credentials compromised, and I think you should do the following things. And what we've seen is we don't depend on skills that it developed from reading the whole Internet to know what to do in that situation.

[29:38] What we fed it was a few very dense kilobytes of information about a particular situation and asked it to be that translator layer between the super technical information and the human in the middle.

[29:52] And it doesn't hallucinate very often, because we're asking it to work on small data. In effect, we're giving it an open book test. I'm feeding you the 10 reasons I think you should look at this.

[30:04] All I'm asking you to do is translate that into a way that it's easy for a human to understand it and let them have a conversation with you about it.

[30:12] And so what that means from a technical perspective, the way that we do that, is a pattern which has become very popular, which is called retrieval augmented generation, or RAG. And that's basically a way that you can go off and fetch some data for the LLM that you think it's going to need for the test you're about to give it,

[30:30] give it that data so that it can take an open book test rather than a closed book test. That has been shown to be amazingly powerful. There are other things, like fine-tuning your model.

[30:44] If you're going to put it into a certain field, medical, legal, you know, you could take the legal textbooks and use those to fine-tune the model so that it basically has better intuition about legal issues and better understands that language, and it's going to be less likely to hallucinate.

[31:01] So it's really about, if you think about it, the hallucinations are artifacts of closed book tests. It's how do I best train the model so that it's most likely to be able to answer in a closed book fashion?

[31:14] And then how do I give it access to the most books, the most current information, how do I let it search the database or the Internet when it needs to to get that current factual information?

[31:26] And that's hugely transformative.
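
A minimal sketch of that retrieval augmented generation pattern, the "open book test": a hypothetical retrieve() over a trusted document store supplies the context, and a placeholder call_llm() stands in for the real model API. The point is simply that the relevant, trusted context travels with the question.

```python
# Editorial sketch of the RAG pattern: fetch trusted context, then ask the model
# to answer only from that context. retrieve() and call_llm() are placeholders
# for a real document-store lookup and a real model API.
def retrieve(question: str, k: int = 3) -> list[str]:
    raise NotImplementedError("stand-in for a search over a trusted document store")

def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real model API")

def answer_with_rag(question: str) -> str:
    passages = retrieve(question)
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```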

[31:29] Pamela Isom: I like that. Those were really good, practical examples without getting into the AI jargon. That was good, right? So I appreciate that. I have a model. I only have one model that I've created for myself so far, one that I personally created.

[31:44] And that model is intended to help me when I provide training and governance. And so I did just that. I fine-tuned it. I've given it specific materials that I want it to reference.

[31:55] I really don't want it referencing the World Wide Web, the entire Internet. I don't want it to do that, because my training is about what I want it to be about, right?

[32:07] So I know what materials I want it to speak from. I want it to speak kind of like I speak. So I give it things that I would normally say and responses that I would normally make to things.

[32:18] And then it's learning from that. And now you can ask it a question. Before, I would think, now I want to go to, for instance, this list of items, or I want to go to the executive order by the administration.

[32:32] And now I don't have to tell it. Right? So I put those documents out there as a reference point for it. It doesn't go and look and find all these other things.

[32:40] It goes and finds the materials that I want it to find. But it's up to me to train it properly. It is up to me, and I take full responsibility, to test it as well.

[32:51] So I haven't released it to the public yet. It's interesting because I was talking to a group of students last week and I shared it with them during the training session.

[33:03] Right? Because it's there to help me train. And so I shared it with them and I was teaching them how to create agents.

[33:09] And during that discussion, I said, like, I'm going to release this soon, but I haven't released it yet because I want to make sure that it's vetted like we want it to be.

[33:20] Now we know that that's a continuous process,

[33:22] but I let them know and I have started to use it in controlled settings. So when I'm there, if something goes off, I'm there to say, well, actually, you know, XYZ are exactly right, or, you know, concur with the outputs.

[33:37] So I love the example that you gave. And so far I haven't experienced hallucinations, not with my agent, but I have experienced hallucinations

[33:51] based on the type of prompts that I provide, which is what you were speaking to: be careful with those prompts. Right. So that was a really, really good analogy.

[34:01] So thank you for that. So if we consider all of that, you talked about some of the good use cases, but there's a lot of concerns, right? Hallucinations, prompt injections, security, securing the LLMs, having a mindset to be more of a steward of the prompts that you feed it.

[34:21] And so that's a lot of work, but the benefits are good. I also heard you say,

[34:28] be careful about the use cases that you assign the LLMs. Right. What are those use cases? So what are the real advantages to using LLMs? If we've got all this going on, what do you see as some real advantages to using them?

[34:44] Steve Wilson: Yeah, I think, look, outside of personal usage, where I do use it a lot, and I think many of us are in that mode,

[34:55] you know, professionally, at Exabeam, we really have based some of our new R&D features this year on it and seen the power of combining that with other types of data and algorithms that we've developed, to basically make our own product much more easy to use,

[35:17] much more accessible,

[35:19] basically to make it more human.

[35:23] And I think that's where a lot of this comes in. Let's face it, computers have up until recently been basically used for the same purposes for the last 70 years, since we developed computers during World War II, and we built them to do two things,

[35:41] which is break German codes and compute artillery trajectories. And everything has just been more complicated versions of that. Nvidia GPUs are just doing a trillion artillery trajectory computations every second.

[35:58] It's just a bunch of floating point math.

[36:00] But that's the point: all they were good at was math. And if what you were doing could be codified down into really clean math, and that's what computer code mostly was, then computers were good at it.

[36:13] If instead it reduced down to being good at language, computers were terrible at it. You know, when we say, oh yeah, I use computers for writing, that was literally just your word processor stringing together characters, and maybe there was a spell checker.

[36:29] We went through this phase shift where large language models understand language. And that's a first for us. And what we're on is a curve that's dramatically accelerating, whether that language is human language.

[36:44] You know, I can interact in English. And funny story: when we developed some of our first new LLM technologies internally, we showed them to our sales team and they got really excited. One of the guys from Japan was on the Zoom call and he raised his hand and he said,

[37:01] hey, when can we get that in Japanese for our customers? And the product manager said, I don't know, I gotta go think about that. And he went back to his desk and he said, I wonder.

[37:10] And he just took some English, put it through Google Translate, got the Japanese back, and he put it into the system and it worked.

[37:18] So when you get these frontier-level foundation models, I think we found it really practically works in about 40 languages with no work from us.

[37:29] It was just free. And so that's dramatic, because we're able to go reach new people, we're able to interact with them in new ways, we're able to make the interactions more human faster.

[37:41] The concrete feedback coming back from our customers, who are hardcore cybersecurity users, is they're able to do their job two or three times faster because they have this support from a copilot.

[37:53] And I think, you know, over the next year we'll go from helping it interpret their actions to helping them carry out their actions faster. And so what I do think is that, even in the very early part of the game that we're in, there's tremendous benefit if you can find the right use cases that match up with what it's good at today.

[38:15] Summarization, language use. Don't put it in the position where you're trying to use it for complex reasoning techniques.

[38:23] Pamela Isom: Court cases.

[38:25] Steve Wilson: Court cases where you haven't,

[38:28] you know, proven out that it's ready for those things. I do think we're in a phase shift where these things are getting better at reasoning. We're getting them better at

[38:37] attaching them to data sources. So in 12 or 24 months, if you ask me about good use cases, I will tell you there are more than there are today. But today, if you just use them for what they're good at, they are immensely powerful and transformative.

[38:51] Pamela Isom: Okay. So understand what they're good at, which is predominantly around language. Anything that involves summarizing materials, which is the use case that I have it for, summarizing materials and assisting me, and anything that involves converting text.

[39:07] It could be text to video, it could be text to audio. I use it for that. But it does involve the text. So that's why it's called language models.

[39:15] Steve Wilson: Yeah, I mean, I think the other thing to keep in mind is use it when hallucinations aren't a problem. And I think people worry about using it for things that are creative, but, boy, are they great brainstorming tools.

[39:30] You throw some ideas at it and you ask it to come back and be your brainstorming partner. And sometimes those things are really fun and interesting. And the possibility of it hallucinating and making stuff up, that might be a feature rather than a bug in those things.

[39:45] It's a case of understanding what it's good at, what the limitations are, and where they work for you and against you.

[39:51] Pamela Isom: Yeah. I spent a lot of time sharing with clients who are looking to advance to the next levels of using AI,

[40:04] and particularly generative AI,

[40:07] and those types of use cases and those types of scenarios. And once the light bulb goes off, I mean, they just go. They just run with it. Some of the tools have it integrated more so than others.

[40:21] I know Copilot is pretty much integrated; it's in Teams now, it's in all the Microsoft stuff. So some of the project managers are looking to use it more.

[40:34] And it's not as seamlessly integrated in some of the products, but it is in others. Right. So we have some good discussions. And then this always comes up as we're talking about the risk and mitigating the risk: why would I bother?

[40:50] And so when we tell them, like, sharpen your use cases, sharpen your scenarios for things like this, then the light bulbs go off, you can see the eyes light up. So that's a good thing, because it's only going to get better.

[41:04] And the risks, we will learn to contain them.

[41:08] It's a future way of thinking, right? Future forward. Future thinking is that we have to be mindful of these risks as we move forward.

[41:17] So I appreciate you talking to me today. I have one final question for you. Can you share words of wisdom or experiences for our listeners that you want to depart with?

[41:32] Steve Wilson: That's a great question.

[41:33] I don't know how wise I am on these topics, but what I will say in general is this transformation that's happening where the AI is suddenly becoming much more capable.

[41:47] It's a big shift, and we all know that. But having lived through several of these technology transitions, whether it was cloud computing or mobile computing or the invention of the web itself, I think this is definitely the most impactful one, at least since the creation of the web.

[42:06] And I think for those of us who are old enough to remember times before the World Wide Web, it was a pretty big transition, but it didn't happen overnight. It happened over several years.

[42:18] But there were individuals and there were companies who understood it early and aggressively moved to embrace it. And there were companies that fought it and said, it's a weird toy.

[42:31] Nobody's going to buy things online. They're going to come to the store. And you know, there are companies that went from being the funny online bookstore, that's Amazon, and there are companies that were, you know,

[42:45] J.C. Penney and Sears, that were the Fortune 500 companies of their time, who got swept up in this by not keeping up.

[42:53] Pamela Isom: Blockbuster, Blockbuster, blockbuster.

[42:56] Steve Wilson: Perfect example. So what I do tell people is there's a lot of risk in this. And that was true in the early days of the web. People got their websites hacked and people lost money and all these things happened.

[43:09] And we had to learn to secure those new digital assets, just like we need to learn to secure our new AI assets and technologies. But you can't ignore them and you can't sit back and wait for somebody else to solve those problems.

[43:25] You've got to get in the game. You've got to experiment with these,

[43:29] understand the technologies, understand the risks, find your first use cases,

[43:34] get to it, and get involved with this, and figure it out firsthand. And that's how you'll build this intuition around how do you make best use of them and how do you make yourself and your business successful.

[43:46] Pamela Isom: Okay, well, that was great. I'm so glad you took the time to conversate with me today,

[43:53] and it's been a pleasure having you on this show,

[43:56] so thank you very much.