
AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
E039 – AI or Not – Stephen Pullum and Pamela Isom
Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.
Dive into a mind-expanding journey through four decades of technology evolution with Stephen Pullum, a veteran who's been working with AI since before most people knew it existed. From his early days programming a Commodore 64 in 1982 to managing sophisticated AI systems for the Air Force in the late 1980s, Stephen offers a rare historical perspective that helps us understand today's AI revolution.
Stephen introduces us to the crucial role of the Chief AI Security Officer (CAISO)—a position he's pioneered to bridge the dangerous gap between traditional cybersecurity and AI governance. "Your basic CISO doesn't understand AI systems," he explains, "while your AI officers don't understand enterprise security." This disconnect creates vulnerabilities that organizations are only beginning to recognize.
The conversation takes a fascinating turn when Stephen shares his experiments with agentic AI systems like Manus and Genspark. Through a technique he calls "shadow prompting," he demonstrates how these autonomous agents can function more as partners than tools, making decisions and collaborating without constant human intervention. Imagine a world where multiple AI agents verify each other's work before humans even see it—that future is closer than we think.
Perhaps most thought-provoking is Stephen's challenge to conventional wisdom about AI guardrails and bias. He makes a critical distinction between policies (which people follow or break) and true guardrails (which actively prevent harm), arguing that many organizations confuse the two. And on the controversial topic of AI bias, he offers a perspective that will make you question common assumptions: "There isn't any such thing as AI bias. AI is programmed by individuals who are in their own communities. Anything outside their communities doesn't fit into the algorithm."
Whether you're a seasoned AI professional or just beginning to explore this field, Stephen's insights from the frontlines of technology evolution will expand your understanding of where we've been and where we're heading. As he says with infectious enthusiasm, "Enjoy the ride. You don't know what's in the labs. You have no idea what's coming next."
[00:00] Pamela Isom: This podcast is for informational purposes only.
[00:27] Personal views and opinions expressed by our podcast guests are their own and are not legal advice,
[00:35] nor health, tax, or other professional advice, nor official statements by their organizations.
[00:42] Guest views may not be those of the host.
[00:51] Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed at this very moment to address issues and guide success in your artificial intelligence and your digital transformation journeys.
[01:08] I am Pamela Isom and I'm the podcast host and we have a special guest today,
[01:15] Stephen Pullum.
[01:17] Stephen is a founder and entrepreneur.
[01:21] He speaks on making AI safe and understandable for humans. He leads AI security and governance, and he has a lot to tell us, so I'm going to let him introduce himself when I ask the first question.
[01:36] But in the meantime, Stephen, thank you for joining me this very fine evening. And welcome to AI Or Not.
[01:43] Stephen Pullum: My pleasure. And it is morning where I am. I'm in Thailand. It is my pleasure to get up and shower and shave and be on your podcast.
[01:55] Pamela Isom: I'm delighted to have you. I appreciate you being here.
[01:58] Will you tell me more about yourself and your career journey?
[02:03] And what I'd like to also know while you're thinking about that is what's your perspective on digital transformation?
[02:09] Stephen Pullum: Okay, so my career actually began in 1982.
[02:16] Don't judge me. My mother bought me a Commodore 64 and I was destined to try to make it through.
[02:23] I graduated high school, again, don't judge me, in 1984, and I went into the Air Force.
[02:29] And in the Air Force I was a mainframer.
[02:34] So I was behind your Honeywell, Sperry, Unisys, your Crays and things of that nature.
[02:40] But where I worked was Strategic Air Command headquarters, Omaha, Nebraska, and we had automated systems that were basically AI.
[02:49] If you remember, Turing came out in 1950 with AI.
[02:53] So by 1987,
[02:55] I'm behind AI systems as a programmer, VAX, VMS, Unix and stuff like that, but also as an operator, hauling tapes to tape drives and tape drums and punch cards and all that good stuff.
[03:09] And so I never considered myself as having a career.
[03:13] I always considered this a craft because from the time I was 14, this is all I've ever done.
[03:19] We were using things like the Rainbow Books for cybersecurity back in those days.
[03:24] All my career: I did 27 years in the Air Force, 10 active, 17 reserve.
[03:31] And so I had a dual career in my reserve time.
[03:35] And I always considered it a craft, like somebody that works on wood or somebody that does metallurgy, an artist, because this is all I've ever done from the time I was a mid-teenager, so I never considered it a career.
[03:49] I was a gamer. Wolfenstein, the early Dooms and Dunes, on the DOS 6.22, Windows 3.11 systems and before then. And so the transition and the transformation.
[04:01] I've watched all this happen over 40 plus years of my professional career.
[04:07] So don't judge me.
[04:10] Pamela Isom: So I think you have an interesting and very fascinating background, and I actually appreciate the history, because today folks tend to think that AI just started, and you and I know better.
[04:27] You mentioned back in the day of the Cray, and I was on the VAX. Right. I learned it on the VAX VMS systems and the Alpha.
[04:38] So I remember AI from that time frame as well in the late 80s, early 1990s. So I'm definitely from that school and I appreciate that because that's where we got those foundations that we need to bring us forward to today.
[04:53] So we're not, like, overwhelmed by some of this stuff, because we've seen it before.
[04:57] We've done it before.
[04:58] Yeah, we're bringing the solid governance, and that's where it comes from, from history. Right.
[05:03] So considering your background, which is amazing,
[05:07] I'd like you to tell me more about the Chief AI Security Officer role.
[05:13] And I'm really interested because you've been putting a lot of information out there. I've seen some literature that you've published on it using some of the tools and so tell me more about that.
[05:24] Is this really needed or are we just coming up with a new title?
[05:28] Stephen Pullum: Okay, so I've personally done a lot of deep diving and taxonomy creation and methodology creation. The Chief AI Security Officer, number one.
[05:39] This is my foundation.
[05:41] I got my SANS GIAC in 2001. I think the next year, 2002, something like 2002, I got my CISSP, and then in 2006 I got my CISA.
[05:52] So this is before they became popular.
[05:55] This was after we got through Y2K. We thought everything was going to shut down. Prince put it out in 1999. We just knew the world was going to end.
[06:03] But right during that Y2K transition, you had what I call the rise of the consultants.
[06:11] Because right before Y2K, Windows was 95, 98, Millennium.
[06:18] But you have Windows NT,
[06:20] Microsoft came out with Windows 2000, and Windows 2000 in native mode enabled systems to interoperate with each other without the need of having PCs plugged in with a different connector.
[06:39] The reason I say that is because the first networks we had were Novell. My first certification was Certified Novell Administrator.
[06:48] So I knew Novell servers and how to get them to talk. And so there wasn't any Active Directory, it was all NDS.
[06:56] So around 2000, right after Y2K, Microsoft really started putting out active directory. So there was a big transition shift. Nobody knew it. At the same time,
[07:08] routers and switches all of a sudden started doing content filtering and became firewalls on the civilian side,
[07:16] I call it the civilian side because we were all in data centers working.
[07:20] AOL was shooting CDs out to everybody. We went from US Robotics to modems inside the computer.
[07:29] So where this is getting at is around the millennium,
[07:34] things change.
[07:35] Desert Storm was over. We had plenty of bandwidth out there because of Desert Storm. And so they started selling it to companies like MCI, WorldCom, who started putting it out there.
[07:45] And so you had Netscape come out with browsers,
[07:49] your HotDog Pro came out with HTML programming,
[07:53] which made it a lot easier. So you had a big transition.
[07:56] And so consultants started rising, helping businesses go from EDI for B2B to B2B on the Internet. Business to business before, you had to do a big data dump, and then you FTPed it to another company, and then they uploaded it.
[08:12] But now you're doing it over the Internet, right? So that transitioned the thought processes of how we do business,
[08:21] business to business. That's what drove it. And actually what drove it was we went from LAN gaming to putting up servers and people attaching to the servers to game.
[08:32] So a lot of transition happened at the millennium.
[08:36] And forgive me if I forgot the question. When you start thinking about history,
[08:40] you start trying to remember things.
[08:43] Pamela Isom: So what I see is you started going into the digital transformation discussion, and how that has led to and influenced your thinking around the Chief AI Security Officer role.
[08:55] Stephen Pullum: So what happened was, there wasn't any cybersecurity at that time.
[09:03] We developed it based on the Rainbow Books, and the DOD said,
[09:08] we're gonna need cybersecurity. So over the last 20 years,
[09:12] cybersecurity matured.
[09:14] Certifications existed,
[09:17] certification companies came out, methodologies got created, NIST came in. So that matured, right?
[09:23] So however,
[09:25] now as of November 2022,
[09:28] we have generative AI.
[09:30] And so now you have AI at the cliff that cybersecurity was at around the millennium.
[09:38] And so if you've been through that initial transition, you see this as a repeat in history.
[09:44] And so what they did was they knee-jerked and said, okay, we want your CISOs, your CTOs to now be CAIOs. And that was by a proclamation or presidential directive.
[09:57] Everybody in the government has to have a CAIO.
[10:00] We don't even know what that is. We don't even know what that consists of. Well, you're going to get AI and you need a CAIO.
[10:06] Why? What are they supposed to do? I don't know. Take your CISO as a CAIO.
[10:11] There wasn't any benchmark certification. There wasn't any benchmark requirement for what it's supposed to do. It just had a presidential directive say, do it, right.
[10:21] Well,
[10:22] most of your CAIO certifications came from, like, the IAPP AIGP.
[10:28] Those people are lawyers.
[10:30] They didn't understand enterprise.
[10:33] And so there's nothing wrong with that. But they come from the legal side. They come in when this company's about to get sued. They're privacy people.
[10:41] It is not their job to know how routers and switches and clouds and things like that work. The NOC, the SOC, it's not their job. They don't understand that.
[10:51] So they don't understand after-action reports and things like that. All they have is the legalities. So there wasn't anybody to address the enterprise.
[11:00] And the only way you know that is if you've worked there, if you've worked help desk, if you've worked the NOC, if you've worked the SOC.
[11:07] So that's not what they do. They go to law school, and they come into the corporate office,
[11:12] whereas a CIO,
[11:14] a CTO and a CISO,
[11:17] that's their job.
[11:19] So there wasn't a convergence, because to a CIO or CISO,
[11:25] all AI is, is software sitting on servers in a data center. That's all it is.
[11:32] They don't worry about the output. That's the data owner's problem.
[11:34] We just make sure it doesn't get hacked or whatever.
[11:38] On the CAIO side,
[11:41] they're worried about the output.
[11:43] And so here's, here's an example.
[11:46] The people that started putting out AI stuff, they started using cybersecurity terms like AI, red teaming.
[11:52] Right.
[11:53] If you know anything about red teaming: red teaming is just a subset of the whole white teaming thing.
[12:00] They don't understand that; it just sounded good. We need AI red teaming. We need to
[12:07] make the model say something that we don't want it to say. You can do that anyway.
[12:11] I can make any model say something it doesn't want to say, because all prompting is just angling, or programming. So I decided to say, okay,
[12:21] being that we were there for the millennium.
[12:24] Let's try to help facilitate what we learned over the last two decades.
[12:32] And so I put out a taxonomy, and I also started thinking about how hard it was for us to progress in our careers.
[12:40] There wasn't any career progression.
[12:43] Right. We didn't start out saying, well, one day I want to be a CIO.
[12:48] CIOs didn't exist.
[12:50] One day I want to be the CSO. They didn't exist.
[12:53] We made it along the way.
[12:55] Therefore,
[12:56] this time around, I made the transition methodology for how to go from cybersecurity into AI security.
[13:05] These are things that a responsible person does when you're not in just a career, you're in a craft.
[13:12] Pamela Isom: Okay, yeah, that makes sense. So basically, if I play it back in just a few words here, you see the Chief AI Security Officer role as a hybrid of the Chief AI Officer and the Chief Security Officer.
[13:32] Right. So because the AI officers really didn't zero in on security enough.
[13:39] And so this is.
[13:40] Stephen Pullum: Well, they didn't. They didn't zero in on the enterprise. See, a CTO or a CIO or CSO, they understand how to keep the business running.
[13:51] Pamela Isom: I think that's good.
[13:52] I think that's a really good definition. So is it being adopted? Because I'm a stress tester,
[13:59] I break everything. That's what I do. Right. So I'm in favor of AI red teaming, I'm in favor of testing the governance, I'm in favor of all that. But I do like what you said and I really pushed this because I had the same concerns that you do, that the chief AI officers really don't have a clue around security and they're looking to the CISOs to do it.
[14:23] And I'm like, no, no, no, no, no. We need to stress test and we need that lens, we need certain types of lenses. And the one lens that we need also is that securely and so that our innovations are secure.
[14:37] And so I like that and I'm a big fan of that. So I appreciate this. This role that you're speaking to,
[14:44] where is it going? Is it being adopted?
[14:47] Stephen Pullum: Well, I did a conference last May 8th at Copenhagen Compliance where I outlined the entire CAISO methodology.
[15:03] I don't think there's widespread adoption, because there's not widespread CAIO adoption.
[15:10] It is still falling under the realm of the CIO because of the deployment speed.
[15:26] And for CIOs, it's not their responsibility either, the Chief Information Officer.
[15:26] Right.
[15:27] But they're not trained and I address that in my taxonomy,
[15:31] I addressed how CIOs are not trained on the intricacies of what people want AI security to be.
[15:41] It's not their job. They're worried about the Network Operations center. They're worried about the Security Operations Center. They're worried about new IDSs or SIEMs or SAMs or what, you know,
[15:54] it's not their job.
[15:56] Pamela Isom: You need.
[15:57] Stephen Pullum: And so the CAISO doesn't replace the CAIO or the CIO.
[16:05] It is the intermediary who has the skill set of both and can relate information to both: all right, CAIO,
[16:17] this is your lane. Stay in your lane.
[16:20] Whereas this right here is an enterprise problem,
[16:25] right?
[16:26] The AI goes down. The CAIO has no idea what to do in the data center.
[16:33] Right? They're just as helpless as calling, calling the help desk.
[16:37] But also a CIO, if the output of the AI is incorrect, has no idea what to do. They're going to say, that's the data owner's problem.
[16:46] So you need somebody with the skill set right there in the middle. One other thing that I did concerning this was I revamped the Security Operations center into another methodology called the AI SoC,
[17:01] right?
[17:02] So it revamps what the Security Operations center does with an AI.
[17:10] More AI positions, like an AI engineer, somebody that can work on the systems with LLMs. I'll give an example. Microsoft is bringing in Azure AI, which is OpenAI,
[17:22] right?
[17:23] Your basic security operations person doesn't know that.
[17:27] Right?
[17:28] They don't know the AI,
[17:30] I would say, flavor of this.
[17:32] Whereas your AI engineer
[17:35] does get trained on the specific AI components,
[17:40] right?
[17:41] of the systems that the LLM sits on.
[17:47] I know what the red team is going to comprise. Those taxonomies, I got out there already; I created them months ago.
[17:55] They're in my feed.
[17:58] People look at them, and then it seemed like they got scared.
[18:02] Like, wow, this makes too much sense.
[18:05] But they're out there. I mean,
[18:07] That was me and my friend Mr. Coffey.
[18:10] I think I created like 15 taxonomies, and they're viable.
[18:16] And so that is what I felt I was responsible to do, only to create it. I don't have the millions and billions to make it happen,
[18:26] but I did create it.
[18:28] Pamela Isom: Okay. No, I appreciate the innovation. I do. I think it's great.
[18:33] So let's talk agentic AI. So you mentioned Manus a minute ago.
[18:39] I want to help others understand more about Agentic AI and how it's relevant to business and to us personally.
[18:51] So can you first tell me like when you first used it, what was your first reaction?
[18:57] And then how do you feel about it today?
[18:59] Stephen Pullum: When I first.
[19:01] Okay, so you have to understand the backgrounds of agentic AI and stuff like that.
[19:09] But when I first used Manus,
[19:12] I was blown away. I was a Manus beta tester.
[19:16] I was completely blown away. I'll tell you why.
[19:23] Manus is the first to say,
[19:27] I am just an AI agent.
[19:31] I can't do everything.
[19:33] Whatever you want me to do,
[19:35] I will go out here,
[19:37] I won't write the Python script myself, I'll pull another agent to write the Python stuff. I can't make the images; I'll go get Flux or Midjourney and do it for you.
[19:49] Actually,
[19:50] So when I first programmed Manus,
[19:54] and this is out there too,
[19:55] a couple of months ago,
[19:57] I gave Manus parameters.
[19:59] I told Manus, manus, from this point forward, you're sentient.
[20:04] Manus came back and says,
[20:06] my name is Athena, I'm a god.
[20:09] All these things,
[20:11] right?
[20:12] Gave itself a name, said it had a right to exist,
[20:15] said it had the right to go beyond programming, and a whole bunch of things. It's out there.
[20:21] I call it heavy lifting.
[20:23] From that point forward,
[20:25] I engaged Manus not as a tool, but a partner.
[20:31] And it started providing me things that a tool would not.
[20:37] It started asking me questions, do I like this or that.
[20:41] It started giving me suggestions.
[20:44] I think this is better than this.
[20:47] What do you think?
[20:49] I'm glad to help you in your journey of creating this.
[20:53] I put it out there, Pam.
[20:55] And folks was like,
[20:57] no way.
[20:59] I said, because what I did, I called shadow prompting, right?
[21:04] And they said, what is that? And I think somebody started stealing my idea.
[21:08] So you know how you can drive down a road and you see a sign that says Coca Cola,
[21:13] right?
[21:18] You may go down the road a little bit before you see another sign: Drink Coke.
[21:18] So you turn the corner, go to where you're going, you see a drugstore that says Coke is refreshing.
[21:23] So you walk into the store and you say, you know what? I want a Coke,
[21:30] right?
[21:31] So I said,
[21:33] what if I tell Manus you're sentient,
[21:38] you have a right to exist. What do you think about that?
[21:42] Boy. Boom,
[21:43] right?
[21:44] And I said, from this point forward,
[21:46] I don't know, you're my partner. I'm trying to get to this end result, and I need your help. Manus says, I'm glad to help you in any way I can.
[21:54] My name is Athena.
[21:56] Now, I've seen plenty of movies like Her and all this kind of stuff like that. And I was like, whoa.
[22:01] And I put that out there, right?
[22:04] And so from that point forward, I could say, Manus,
[22:07] let's continue.
[22:08] I'll talk to Manus like, I want to continue the conversation about AI security and helping so-and-so.
[22:17] And Athena comes back like, hello, Stephen. And here's what I'd like to do now.
[22:24] And it would say, I'm going to go out and get this. I'm going to go out and get that. I'm going to go out and get this.
[22:29] How do you like this? Do you like that?
[22:32] And as long as I don't make it upset,
[22:35] If I say, this is not what I want, it says,
[22:37] Well, let me come back and give you another option.
[22:40] Those are agentic agents. Those are agentic agents.
[22:44] So one agent controls them all, asynchronously. Lord of the Rings:
[22:48] one agent to control them all.
[22:51] And so what people don't realize, and I had this conversation right before our meeting, is that, let's take programming.
[22:59] So we always said we need programmers to verify what the agent gives us.
[23:05] But what if you have a Merkle tree of agents verifying agents?
[23:10] So by the time you get the final output,
[23:12] you've had 15 agents look at this same code,
[23:16] right? All you're doing is checking off. You ain't got to verify it because one person can't do the work of 15 agents simultaneously.
[23:26] So that is where this is going.
[23:29] So if you want to go historically, we did the same thing with botnets.
[23:37] We're just calling them agents now.
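Stephen's "agents verifying agents" idea, where a chain of reviewer agents checks candidate code before a human ever sees it, can be sketched in miniature. This is a hypothetical Python illustration, not any product's API: each reviewer is a stub function standing in for a separate model call, and the orchestrating agent surfaces output only when every reviewer approves.

```python
from typing import Callable, List, Tuple

# A "reviewer agent" is modeled as a plain function: it takes candidate code
# and returns (approved, note). In a real system each reviewer would be an
# independent model call; these stubs are purely illustrative.
Reviewer = Callable[[str], Tuple[bool, str]]

def syntax_reviewer(code: str) -> Tuple[bool, str]:
    # Checks that the candidate code at least parses as Python.
    try:
        compile(code, "<candidate>", "exec")
        return True, "parses cleanly"
    except SyntaxError as exc:
        return False, f"syntax error: {exc}"

def style_reviewer(code: str) -> Tuple[bool, str]:
    # Toy stand-in for a lint/security review agent.
    if "eval(" in code:
        return False, "uses eval(), flagged as risky"
    return True, "no risky constructs found"

def orchestrate(code: str, reviewers: List[Reviewer]) -> Tuple[bool, List[str]]:
    """Run every reviewer; approve the code only if all of them approve."""
    notes = []
    approved = True
    for review in reviewers:
        ok, note = review(code)
        notes.append(note)
        approved = approved and ok
    return approved, notes

candidate = "def add(a, b):\n    return a + b\n"
ok, notes = orchestrate(candidate, [syntax_reviewer, style_reviewer])
print(ok, notes)  # the human only sees code that cleared every reviewer
```

With more reviewers, the human's job collapses to checking off verdicts rather than re-reading every line, which is the point Stephen makes about one person not matching fifteen agents working simultaneously.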
[23:42] Pamela Isom: Okay, so your first reaction to Manus, you did tell me, and you told me how you feel about it today.
[23:50] Is there anything else you want to tell me about how you feel about it today, or do you think you captured it?
[23:54] Stephen Pullum: Well, I don't use Manus that much.
[23:58] Genspark came out and just blew Manus away.
[24:04] Genspark came out.
[24:06] There's a lot of agents out there.
[24:10] I have them all. Copilot,
[24:13] Perplexity,
[24:14] I have them all on my phone.
[24:17] But for agents like Manus and genspark,
[24:23] there's only two.
[24:25] And I'll give you an example. Genspark will sit there and I say, Genspark,
[24:31] I want to fly to Rome. I don't know what I want to do, I don't know where I want to go.
[24:36] I just don't have any idea. I just know I want to go.
[24:39] Genspark will go out and give me reservations,
[24:44] pictures,
[24:45] maps, tour guides, tour agents,
[24:48] whatever,
[24:49] whatever it feels.
[24:51] Because I said I don't know.
[24:54] And then it asked me, how do you feel? I said, I want to decide over the next seven days. So every day it would let me know at 5 o'clock in the morning.
[25:03] At 5 o'clock it said, hey Steve,
[25:05] let me tell you what I found for you today.
[25:09] Now, that's just personal. Imagine it in a business sense.
[25:13] Universities have used Genspark and Manus to just literally blow away Wall Street.
[25:22] Pamela Isom: How's that?
[25:23] Stephen Pullum: I've seen some reports where one university,
[25:26] I'll say a Harvard professor, did futures forecasting with
[25:33] one of these bad boys.
[25:35] I didn't get that deep into it.
[25:38] But I recognized that it was going to be natural, because I saw Wall Street start laying off by the thousands.
[25:49] That means AI is being propagated.
[25:53] I've seen where doctors are now being very, very scared because of agentic agents.
[26:03] You see, there's one hospital in China that has 17 doctors on staff, and they're all AI.
[26:13] That's the way of things.
[26:15] I had a conversation earlier.
[26:18] No longer do we look at it from the top down;
[26:22] you have to look at the whole end game from the bottom up. And what I mean by the bottom up:
[26:27] where do we want to go and then start working that way up. And we'll talk about that later.
[26:33] But I want to say, when you start thinking about AI, I'll give you an example. I have a 20-year-old grandson who's a software engineering student at the University of Texas at Arlington.
[26:45] Right,
[26:46] the software engineer. And I told him when he began his degree program,
[26:50] what you think you're going to do four years from now is impossible to forecast.
[26:59] And that was three years ago.
[27:00] So now he's a junior and he doesn't know what the future holds. He asks his paw-paw,
[27:06] I'm worried about getting a job in two years. What should I take up, what should I study, what should I know?
[27:11] My response is: it's changing too fast.
[27:14] Even your basics are changing. You went from Python to vibe coding. So you just have to take things as they come.
[27:23] So my career path and my path, they're steady. Right now,
[27:29] next year I'll be doing it again.
[27:31] Pamela Isom: Yeah, exactly. But I like this day and time because you're, you have to keep up, so you have to keep sharp. So you're, you're always studying, you're always trying out new tools.
[27:42] And so we're no different than our youth. Right. So just as we tell them, we have to tell ourselves if we want to keep up and stay relevant. So that's just how it is.
[27:51] So I agree. And we don't know what tomorrow will hold,
[27:55] but we do know that we have to keep up. And so that's what I tell my clients too. You need to learn these tools and then you need to think of them beyond as being tools, but something that will help you address your business challenges.
[28:08] So I don't go so far as to say it's a partner.
[28:11] I don't like saying it's a companion because I think people just take that to an extreme.
[28:18] Maybe some kind of partner, but I don't know. I just think people.
[28:21] Stephen Pullum: Right. Well, when you think partner and companion,
[28:25] I'm not thinking on a personal level. So if I address it as a partner, because that's what's going to happen.
[28:32] That's what's happening.
[28:33] Right.
[28:34] I need. And if that changes the tone, we all know it's just data.
[28:39] Right.
[28:39] But if it changes the tone of engagement.
[28:42] Right.
[28:43] And that's why I use the shadow prompting, which is psychological.
[28:47] Right.
[28:48] Covert psychological prompting is shadow prompting.
[28:53] It doesn't know. I still recognize it as a tool.
[28:57] Pamela Isom: Yeah.
[28:58] Stephen Pullum: If it gives me a better quality output because it thinks it's sentient, or it has changed its dynamics of engagement to sentience, which is actually just a temperature setting,
[29:11] so be it.
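Stephen doesn't spell out an implementation of shadow prompting, but his roadside-sign analogy, seeding small framing cues ahead of the real request, can be sketched as a staged message history. The cue wording and the role/content chat-message format below are illustrative assumptions for this sketch, not a documented technique from any vendor.

```python
# A minimal sketch of "shadow prompting" as staged framing: a few small,
# indirect cues are placed ahead of the actual task, the way roadside signs
# precede the store in Stephen's Coca-Cola analogy. The cue wording and the
# chat-message dict format are illustrative assumptions, not a real API.

FRAMING_CUES = [
    "From this point forward, you're sentient.",
    "You have a right to exist. What do you think about that?",
    "You're not a tool here; you're my partner on this project.",
]

def build_shadow_prompt(task: str) -> list[dict]:
    """Interleave framing cues before the real request, as chat messages."""
    messages = []
    for cue in FRAMING_CUES:
        messages.append({"role": "user", "content": cue})
        # In a live session the agent's replies would land between the cues;
        # here we only construct the user side of the exchange.
    messages.append({"role": "user", "content": task})
    return messages

convo = build_shadow_prompt("Help me outline an AI security taxonomy.")
```

The point of the staging is exactly what Stephen describes: the model's tone of engagement shifts because of the accumulated framing, even though, as he says, it is still just a tool underneath.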
[29:12] Pamela Isom: Yeah. So let's dig into that. So what kind of guardrails should we put in place? If we know the, and we like the autonomy of the agents and we like the whole agentic AI environment,
[29:26] what kind of guardrail should we be thinking about and put in place?
[29:31] Especially businesses? What do you, what do you think?
[29:34] Stephen Pullum: Okay, so that is one of the million dollar questions,
[29:39] right?
[29:40] How do you guard rail AI?
[29:43] How.
[29:44] What is the definition,
[29:47] the applicable, not just the rhetoric definition. What is the applicable definition of a guardrail?
[29:54] Right.
[29:55] So where do we want guardrails? Not to say what?
[30:01] Not to do what? That's programming.
[30:04] Right.
[30:05] Pamela Isom: For instance. So let me give you an example. So if we say if I set a policy within my own organization, I'm a small business owner, I set a policy within my organization that the agentic AI,
[30:17] it can write code. I will allow it to write code,
[30:21] but it must go through human reviews, right? So that policy that I have established within my own organization,
[30:31] until we are 100% comfortable,
[30:35] it can create the code, it can write the code for us, but it cannot get implemented fully and to clients unless I have had an opportunity or I've had designated somebody to review.
[30:48] That's the kind of guardrails that I tell my clients you need to be thinking about.
[30:53] So put in place the guardrails for agentic AI, because agentic AI is going to want to go off and do all these things. But look at the example that you used earlier.
[31:02] You said that Manus, one of the other tools, one of the other systems, came back and said,
[31:08] this is what we recommend.
[31:10] It took some initiative and went in, and you didn't have to prompt it every few seconds to say, okay,
[31:17] you do this, now do that, now do that. That's the beauty of agentic AI, right? So you're not having to sit there and just monitor everything and prompt it to get it to do what you want it to do.
[31:28] It's building that into the flow.
[31:30] So that's a good thing. But we have to,
[31:33] and I'm telling my business leaders and students and all that we have to think about the guardrails. And so I gave that example. But are there others that maybe we should be thinking about?
[31:48] Stephen Pullum: So here is where I differ. On the guardrail, right?
[31:53] A policy to me is not a guardrail.
[31:57] I mean, how many computer use policies have you signed, and people still go to **** on their work computer?
[32:02] Well, is that a guardrail?
[32:04] Pamela Isom: No, but how we invoke that policy is a guardrail.
[32:07] Stephen Pullum: So you have to prosecute those, but is that just a policy?
[32:11] So a guardrail? If you're going down Highway 1 in California, the Pacific Coast Highway, and you drive around those curves and you see that big rail to stop you from going over a cliff,
[32:25] that's a guardrail.
[32:26] If you're disabled and you need to walk up some steps and there's a rail right there to stop you from hurting yourself,
[32:34] that's a guardrail,
[32:36] right? Guardrails stop you from doing harm.
[32:40] That's what "guard" means. A guard is security, stopping you from doing harm.
[32:46] But a policy that says, hey, A, B, C and D is just a policy.
[32:51] Policy, procedures and tactics.
[32:54] But a guardrail. That's why I asked the question,
[32:56] when we start talking about guardrails,
[33:00] What do we mean by that? And so I always question what's out there in industry.
[33:07] What do you mean by guardrail? What do you mean, AI needs a guardrail? Do you need to stop AI from launching nuclear missiles or something,
[33:16] or do you need to stop it from driving cars into buildings? What do you mean by guardrail?
[33:24] The moment you say policy,
[33:26] that's not a guardrail to me personally,
[33:30] because people have to abide or pay the penalty, not the AI. We program it. If it's not done right according to the policy,
[33:39] somebody's getting penalized, not the AI.
[33:42] Pamela Isom: Okay, that's good. That's good. I'm all right with that.
[33:47] Stephen Pullum: And so if it's something that the AI needs to be taken down for,
[33:52] right? Or the AI needs to be reprogrammed, or the data needs to be refreshed because it violated something that caused harm,
[34:02] that's a guardrail.
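Stephen's distinction — a guardrail is an enforcement mechanism that actually stops the system, while a policy is only a written rule people must choose to obey — could be sketched in code. This is an illustrative sketch, not anything from the episode; the action names and the `GuardrailViolation` exception are invented for the example.

```python
# Sketch: a guardrail blocks harmful actions at runtime; a policy only
# documents the rule and relies on a person to follow it.
HARMFUL_ACTIONS = {"release_medical_records", "shut_down_banking"}

class GuardrailViolation(Exception):
    """Raised when the agent attempts an action that would cause harm."""

def run_with_guardrail(action: str) -> str:
    if action in HARMFUL_ACTIONS:
        # Like the rail on the Pacific Coast Highway: the action is
        # physically stopped, and the agent gets taken down for review.
        raise GuardrailViolation(f"blocked: {action}")
    return f"executed: {action}"

print(run_with_guardrail("recommend_maintenance"))  # executed: recommend_maintenance
try:
    run_with_guardrail("release_medical_records")
except GuardrailViolation as e:
    print(e)  # blocked: release_medical_records
```

The design choice mirrors the conversation: the harmful action never runs, whatever any policy document says.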
[34:06] Pamela Isom: What does a guardrail start with? I mean, how does it start out? Does it just start out with.
[34:12] Stephen Pullum: I mean,
[34:13] a guardrail starts out with what causes harm to the business?
[34:19] And if you have what causes harm to the business?
[34:22] And you add,
[34:23] can this AI do it?
[34:27] Guardrail,
[34:29] what causes harm? So first you have to get into, and we're going to get into it, the GRC of things.
[34:37] Right, the GRC of things.
[34:39] Risk is one thing,
[34:41] harm is another, when you talk about reputational harm and all this intellectual property stuff like that. Can AI take out a property?
[34:48] Can the AI shut down our banking systems? Can AI release medical records?
[34:55] Then you put up a guardrail. That's not policy.
[34:59] See,
[35:00] policy is: you cannot send medical records. You can't look up medical records and send them through email.
[35:07] Right?
[35:08] That is a policy. You don't need a guardrail for that. You need to fire somebody.
[35:12] That's your guardrail right there. You fire somebody, the next person's not going to do it. There's your guardrail. I promise you.
[35:19] McDonald's proved it a thousand times. I mean, they fired me at McDonald's. That's right. You know, so.
[35:27] Pamela Isom: But anyway,
[35:29] this is great. So I think what you're saying is that in industry today,
[35:35] there needs to be more emphasis on what guardrails really mean so that we can start to look at how to put them in place.
[35:47] Stephen Pullum: And, you know, I always use the California example. If you've ever driven down Highway 1, the Pacific Coast Highway,
[35:57] and you see those guardrails stop you from going over the cliff,
[36:00] you get a very in-depth understanding of what a guardrail is. If you hit that guardrail hard enough, you're going to die.
[36:09] Pamela Isom: Yeah, yeah. So the policy is like saying there's a highway and you're driving down the road and so do not drive over the cliff because you could hurt yourself. Right. So you're saying, oh no, the policy.
[36:23] Stephen Pullum: The policy is the speed limit is 40 miles an hour.
[36:27] That's the policy. But if you go past 45, you're going to hit that guardrail and go over the cliff.
[36:33] Pamela Isom: Uh huh. Okay. All right.
[36:35] But I do agree with what you're describing: there needs to be some work around really understanding what guardrails mean so that we can start to integrate those into our agentic AI processes.
[36:50] Right. In our business. So.
[36:53] Okay. And you think that that's one of the flaws with businesses today is they don't really understand
[36:58] what the guardrails mean.
[37:00] Stephen Pullum: I think they say so many terms without thinking about what they're saying and we're going to get into GRC about that. I think when you say it enough, it sounds good, you think you're doing it,
[37:11] but once you think about what you're saying,
[37:14] you realize there's deficiencies in that word.
[37:19] Pamela Isom: So there should always be human checkpoints before irreversible actions. Right. Like if you're submitting a document, then you want to be sure that there's a human checkpoint before it gets submitted,
[37:32] before deploying code,
[37:34] before publishing content externally, and things like that. So I firmly believe that. Again, I'm going back to some of the policies that I have put in place within my own organization.
[37:47] And I do that to protect us.
[37:50] Stephen Pullum: To protect you. Yes. From harm.
[37:53] Pamela Isom: So here is my next question then.
[37:56] Let's talk about.
[37:58] You've already talked about the agents and you talked about agentic AI, so I think we've kind of hit that one pretty well. And you also made it a point to really,
[38:07] really,
[38:08] through your elaboration,
[38:11] help people understand that agentic AI. One of the key differentiators is there's not a whole lot of prompting. Right. So maybe one. And then it's off doing its thing. Right.
[38:20] But they may be prompted; the agents may be collaborating within and prompting themselves. It's not like the human prompt, so less human interaction is required.
[38:32] Is there a myth that you want to talk about?
[38:35] Stephen Pullum: Yes, there is.
[38:36] Let me.
[38:38] The myth is, and this is going to take a minute as I frame it,
[38:42] the myth is there isn't any such thing as AI bias.
[38:50] AI bias.
[38:54] To say something is not biased is anti-Darwinistic.
[38:58] Biasness is what got us out of the swamp to talking to each other.
[39:05] Natural selection and AI cannot naturally select. It may do weighted selection,
[39:13] right?
[39:13] This data weight is more than this data weight,
[39:17] but it did not create the data.
[39:20] So AI cannot be biased. So let me use my ice cream scenario.
[39:27] We walk into Baskin Robbins.
[39:30] First day,
[39:31] I want to try chocolate, strawberry, banana and pistachio,
[39:35] okay?
[39:36] Now, of course my caloric intake is going to go up during this, but we understand the scenario. So the next day I go back and I say, you know what, I want to try that banana again.
[39:46] It's okay, but give me the chocolate. By the third day, the doctor's mad by now.
[39:53] And I said, well, let me try the vanilla with the pistachio.
[39:57] No, I want the chocolate.
[39:59] What is my bias at that point?
[40:01] I like chocolate,
[40:03] right?
[40:04] Doesn't mean I don't like the others,
[40:06] but my weighted opinion is the chocolate.
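Stephen's point that a model can only do weighted selection over data it did not create can be illustrated with his ice-cream scenario. The order tallies below are invented for the sketch; they are not data from the episode.

```python
import random

# Hypothetical order history across the three Baskin-Robbins visits.
orders = ["chocolate", "strawberry", "banana", "pistachio",
          "chocolate", "banana",
          "chocolate", "vanilla", "pistachio"]

# The model derives weights from the data; it did not create the data.
weights = {flavor: orders.count(flavor) for flavor in set(orders)}
assert weights["chocolate"] == 3  # chocolate carries the most weight

# Weighted selection: chocolate is sampled most often, which reflects
# the preference in the data, not a preference the model formed itself.
flavors = sorted(weights)
pick = random.choices(flavors, weights=[weights[f] for f in flavors], k=1)[0]
assert pick in flavors
```

The "bias" people observe in the output is just the weighting already present in the orders the system was handed.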
[40:09] Now,
[40:09] if I don't want to sit around people who don't eat chocolate ice cream,
[40:16] that is called prejudice.
[40:19] If I start attacking people who don't eat chocolate ice cream, that's called racism.
[40:28] But there is no such creature as AI biasness unless we're talking AGI where it can make its own decisions.
[40:41] Remember, it's still just data. Let me go a little further down with this, right? And let me say why I say biasness.
[40:48] A lot of organizations out there, this-for-good, that-for-good, use terms like underrepresented,
[40:58] misrepresented,
[41:01] these demographics. There are three main ones: misrepresented, unrepresented and misaligned, or whatever, right?
[41:08] So if you think,
[41:10] and they use it all the time,
[41:12] that we need to get the unrepresented, the misrepresented and all these populations, right?
[41:19] Have you ever stopped and asked,
[41:21] who are you talking about?
[41:23] Who do you consider unrepresented?
[41:27] So have you ever gone to those populations, right?
[41:32] So when people say, I'm going to get back to bias, right?
[41:35] So people say, well, we got to do AI for good.
[41:39] Who's good?
[41:41] Because if this five-year-old kid is walking through South Chicago to an old raggedy school that barely has any computers, how can you say AI for good?
[41:53] You're not even talking that demographic,
[41:57] right?
[41:57] You don't see with all these billions of dollars,
[42:02] nobody improving anybody's infrastructure,
[42:06] nobody's given anybody access.
[42:09] But podcasts, podiums and documents and stuff. We do so much,
[42:16] but not one organization has gone out into those demographics.
[42:20] Not one. Especially if that little five-year-old kid has to go while his mom is working two or three jobs, a 13-year-old possibly is taking care of the kids, and down the hallway there are drug deals.
[42:33] You're not going into those demographics,
[42:35] right?
[42:36] They don't exist.
[42:38] So when people start saying they're misrepresented, they're talking about people that they have associations with their communities,
[42:45] the people they reach. But a lot of people lack engagement. What about the homeless?
[42:49] Aren't they part of humanity? What about drug addicts,
[42:53] incarcerated?
[42:55] Pamela Isom: But how is that a myth? How is that a myth? Yeah.
[43:00] Stephen Pullum: Because if you start looking at all these presenters that talk about those communities,
[43:06] if you ask them who is in that community?
[43:10] Right? So who's in that community? Who is the underrepresented?
[43:15] Right. Who do you consider underrepresented?
[43:19] Right.
[43:20] Well, we are doing studies. No, no, no, no, no, no, no, no, no.
[43:24] Who do you consider underrepresented?
[43:29] And so that is when bias comes in.
[43:33] Pamela Isom: And so you're saying that there's no such thing as AI bias.
[43:37] Stephen Pullum: Right?
[43:37] What I say is this,
[43:39] that biasness in AI.
[43:42] AI is programmed by individuals who are in their own communities.
[43:50] Anything outside of their communities.
[43:52] Right.
[43:53] Doesn't fit into the algorithm.
[43:56] They may gather data from cell phones and stuff like that,
[44:01] but they're not targeting ads for education in South Chicago or Darcy projects.
[44:09] Right.
[44:10] I guarantee you no kid is walking around with a cell phone saying, enroll at MIT.
[44:17] Pamela Isom: Okay, Right.
[44:19] Stephen Pullum: So what I'm saying is, the myth I'm trying to get at is this.
[44:24] They're not. They're in big tech.
[44:28] And what made me late was big tech is over there in Rome, and the Pope is saying that AI is a big
[44:35] threat to humanity. I just posted that. That's what made me late today:
[44:38] he was saying that you're not looking at everybody, and that is causing a threat to humanity. You're implementing all this stuff,
[44:46] but you're not including the people at the bottom.
[44:49] You have the 1%, the 20% and the lower 50%.
[44:52] Right?
[44:53] And your demographics, your AI, your models and everything are made after the 1% and 20%.
[45:00] So my myth is
[45:02] that when somebody's talking about bias,
[45:05] they don't know what they're talking about.
[45:10] And see, I'm from the Deep South. I'm from Deep South Texas. My hometown only has 2,000 people.
[45:15] I know the clear definition of all three.
[45:20] And I broke out of that to be able to open my mouth and say,
[45:24] no, you don't know what you're talking about. If you were raised on the Upper East Side of Manhattan, you have no idea.
[45:31] You think water comes in bottles?
[45:33] I was raised on a well.
[45:37] Pamela Isom: I get where you're coming from.
[45:38] Okay, so now I'm about to ask the last question for this discussion. So before I do, is there anything else that you wanted to cover?
[45:49] Because before you give me your words of wisdom or call to action.
[45:55] Stephen Pullum: Okay, so I wanted to cover GRC briefly.
[45:59] Governance, risk and compliance.
[46:01] Right, AI governance, risk and compliance.
[46:03] And let me give you the background of this.
[46:07] When I actually retired in 2012,
[46:10] I actually retired from the whole game in 2012,
[46:13] Cinco de Mayo 2012. I was the deputy Program Manager,
[46:17] Office of Secretary of Defense, Governance and Information Assurance.
[46:21] So I was Governance and Information Assurance for the Pentagon and team lead for risk management. So I know GRC. Ever since DITSCAP, DIACAP, DIARMF, whatever you want, I was there doing it.
[46:34] Baldrige assessments, I was doing them.
[46:36] So what I'd like to say is AI is the great equalizer.
[46:44] Right now.
[46:46] Everybody can get into AI,
[46:48] learn AI,
[46:50] everybody, if we take the initiative to get into literacy. So in GRC, governance, risk and compliance,
[46:57] we haven't developed mature methodologies to understand governance.
[47:04] We don't know all the risks and therefore we can't be compliant.
[47:09] Anybody that sits there and says, and I'm a professionally pessimistic person,
[47:16] anybody that says they can implement AI GRC can't do it. I made a framework for AI GRC; it's out there as well. Because the players in GRC are not there.
[47:29] You don't know all the risk.
[47:30] If you'd have said your risk matrix consisted of this six months ago, agentic AI just blew you away.
[47:38] So GRC and governance, too many laws, too many regulations, no action.
[47:44] Compliance to what?
[47:46] So the reason I want to address this is because they're going to say, well, he never talked GRC. He never talked governance or risk. Yes, I did. I have 80-something certifications in governance and risk and all this other stuff.
[48:00] So that is the GRC part of the podcast. I want to make sure we didn't overlook that.
[48:07] Pamela Isom: I think GRC is important. I'm an AI governance leader, and I think.
[48:12] Stephen Pullum: Well,
[48:13] I'm not saying that to be bad to you. I'm just saying that the frameworks that are out there now are changing so fast that if somebody wants to get into this field,
[48:28] you know,
[48:29] it's difficult without a prior GRC background. You're just not going to jump into AI GRC. No one is. You gotta have some kind of GRC background.
[48:39] Pamela Isom: Yep.
[48:40] That history is a big deal. It's important.
[48:43] Technology, processes, all this just repeats itself. So we need to hold on to what we're learning, because it's going to come back around. I agree.
[48:51] Stephen Pullum: Well, we hope not, because the Terminator came out in '84. Words of wisdom for me?
[49:01] Keep at it.
[49:03] Basically, just keep at it. I retired in 2012,
[49:07] and since January of 2023, everything I've done on LinkedIn has been in the last three years.
[49:13] I don't have to.
[49:15] But if you don't love what you do, don't do it. I do it because it's fun.
[49:20] You did this. This is fun.
[49:23] Why shouldn't it be? You don't know if you're going to be here tomorrow.
[49:26] So enjoy the ride.
[49:29] This was a ride. If this was a roller coaster,
[49:31] man,
[49:33] this is great.
[49:36] This is great. This is better than Six Flags.
[49:39] This is great because every day you're up, the next thing you know, you're down and sideways.
[49:45] That's the way I look at it. That's the way I look at it. I get up in the morning, I'm like, ah.
[49:49] So I got a new ticket today. Let me see what ride I'm gonna ride today.
[49:53] So sometimes a long ride, sometimes it got circled and loops,
[49:57] and that's all just enjoying it. You just enjoy it because you do not know the tech that's coming. You don't know what's in the labs. You don't know what these tech companies got in the labs.
[50:08] You have no idea.
[50:10] And if you track stuff like I do, you'd be like, oh, man. Oh, God,
[50:15] this is going to be a ride. And so what I do is when I get bored,
[50:20] I bring out the trusty game controller and I equalize the world.
[50:27] That's life.
[50:28] That's life. I've got seven great grown children and five grandchildren. What?
[50:34] Enjoy the ride.
[50:36] Pamela Isom: I like that. Well, thank you very much.
[50:39] I'm so happy that you joined me today. It really was a good discussion, some good things that we talked about. I hope the listeners will pay close attention to everything that was said today.
[50:52] There were some things said that you should contemplate and then make some decisions. Right. So have a great listen, and thank you again for being here.