AI or Not

E038 - AI or Not - Evan Benjamin and Pamela Isom

Pamela Isom Season 2 Episode 38

Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.

The rapid advancement of artificial intelligence has created an urgent need for a deeper understanding beyond basic prompt engineering. In this illuminating conversation with Evan Benjamin, a senior project delivery consultant and AI specialist, we uncover the critical infrastructure considerations that are often overlooked in the rush to adopt the latest AI technologies.

Evan shares his remarkable journey from legal tech expert to AI infrastructure specialist, highlighting how the worlds of e-discovery and generative AI have converged in unexpected ways. What began as attorneys experimenting with prompts has evolved into complex, multi-agent systems that require entirely new approaches to implementation and security.

One of the most compelling insights centers on how organizations approach AI tools—treating them as simple product upgrades rather than fundamentally different technologies with unique security implications. "We're beta testers for OpenAI and Anthropic, but we're completely neglecting our own privacy and security," Evan warns. This cavalier approach extends to skipping essential documentation, such as model cards, which contain critical information about capabilities and limitations.

We explore the evolution of threat modeling for AI systems, examining why traditional cybersecurity frameworks, such as STRIDE or PASTA, cannot be applied directly to AI environments. New frameworks such as MAESTRO (designed specifically for multi-agent environments) and the OWASP Top 10 for LLMs represent more appropriate approaches for identifying AI-specific threats. With new attack surfaces emerging through agentic AI, organizations must adapt their security practices accordingly.

The conversation takes a fascinating turn toward AI literacy, particularly examining how the EU AI Act establishes a higher standard than many organizations currently achieve. While companies claim to prioritize AI adoption, true literacy extends far beyond basic prompt abilities to comprehensive knowledge of the AI lifecycle. This literacy gap presents significant challenges but also opportunities for those willing to invest in deeper understanding.

As we transition from the era of LLMs to what Evan calls "the year of agents and agentic AI," your organization's approach to implementation, security, and governance must evolve in tandem with the technology. Take the first step today by committing to improve your AI knowledge by just 2% daily—whether through videos, books, or articles—and watch how quickly your understanding transforms.
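
The closing "2% daily" habit compounds faster than intuition suggests. As a rough back-of-the-envelope sketch (this only models compound growth; the 30- and 365-day figures are arithmetic, not a measured learning outcome):

```python
# Back-of-the-envelope: what "2% better every day" compounds to.
def compounded(rate: float, days: int) -> float:
    """Growth factor after `days` of improving by `rate` per day."""
    return (1 + rate) ** days

month = compounded(0.02, 30)   # ~1.81x after a month
year = compounded(0.02, 365)   # ~1377x after a year
print(f"30 days: {month:.2f}x, 365 days: {year:.0f}x")
```

The point of the habit is exactly this curve: small daily increments dominate over a year.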



[00:00] Pamela Isom: This podcast is for informational purposes only.

[00:27] Personal views and opinions expressed by our podcast guests are their own and not legal advice.

[00:35] Nor are they health, tax, or professional advice, nor official statements by their organizations.

[00:42] Guest views may not be those of the host.

[00:51] Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed right now to address issues and guide success in your artificial intelligence journey and your digital transformation journey.

[01:07] We have a unique and special guest with us today, Evan Benjamin.

[01:13] Evan is a product delivery senior consultant. He's a colleague who creates AI nuggets that I'm looking forward to hearing more about.

[01:24] Evan,

[01:25] welcome to AI Or Not.

[01:27] Evan Benjamin: Thank you. Thank you, Pamela. Happy to be here.

[01:30] Pamela Isom: Okay, so to start out with,

[01:32] let's have you tell me more about yourself, your career journey.

[01:37] I'd like to understand more about what's driving you to do what you do today. I see a lot of your communications on LinkedIn, but I know you do more than that.

[01:49] So what's that driver behind that? And tell me what you got planned for tomorrow.

[01:56] Evan Benjamin: Well,

[01:57] I'm glad you asked that, because I didn't always do AI. I was in legal tech. I've been in legal tech for years, for 15 years or more. And I work with Relativity.

[02:10] I started out just doing Relativity ediscovery,

[02:14] and I didn't know that I'd end up in this AI journey.

[02:19] My whole life was helping attorneys with legal tech and litigation technology and Relativity and becoming a Relativity master.

[02:26] I never knew that Relativity and AI would just collide like it did now because of Gen AI. And ever since people got into Gen AI, they thought that Gen AI would be a separate world.

[02:41] But then Relativity said, let's bring Gen AI into Relativity.

[02:47] And even then, people said, what does that mean?

[02:50] I started seeing attorneys,

[02:52] prompting, and even before I came to Deloitte,

[02:58] I started seeing attorneys wondering, because I would go to these conferences, Relativity conferences,

[03:07] they would start talking about Gen AI and what's the impact?

[03:12] And at first it was,

[03:14] is Gen AI going to replace attorneys?

[03:20] That was the focus. So I would just go to these Relativity Fest conferences and I would sit there and take in, not

[03:28] so much the fear, but the concern about the impact of Gen AI on the legal industry. Right?

[03:36] So I would just take a very cautious approach.

[03:39] I would be on the sidelines, just cautiously watching Gen AI and its impact. And back then, Pamela, it was all ChatGPT.

[03:50] We didn't hear too much about the other products.

[03:53] So we just became good prompt engineers.

[03:58] That's it. We became good prompt engineers. We didn't hear about agents or agentic. We just heard about be a good prompt engineer. And that's all we did.

[04:09] So fast forward to today: 2025 is the year of agents and agentic AI, and Relativity is keeping up with it and asking, how can we use this to help attorneys?

[04:26] But now I started writing on LinkedIn about who else is this impacting,

[04:32] right?

[04:33] What are people missing? It's not just,

[04:35] let's be good prompt engineers. That's important,

[04:39] but what else are people missing? How do they keep up with this?

[04:43] This lightning-round,

[04:47] jump-right-in deep dive into AI, which has now transformed from LLMs like ChatGPT into agents, into multi-agents, into this new thing we're hearing about, MCP.

[05:07] So anyway, there's this huge progression of stuff, and I'm helping people keep track of that. But you know, Pamela,

[05:17] I think it's a progression and I think people are jumping from one end to the other without going through the right stages.

[05:27] So my job with these nuggets is to take people, hold them by the hand, just bring them to this,

[05:34] to this current level. That's what I want to do.

[05:37] So that's how I fell into these AI nuggets.

[05:40] And, by the way, I have a very strong networking technology background,

[05:46] so I can see the infrastructure.

[05:48] I like infrastructure.

[05:51] So I went from Relativity infrastructure to AI infrastructure.

[05:55] That's how I got here today. That's why you're talking to me today,

[05:59] because I believe in building the perfect infrastructure. So let's take it from there.

[06:05] Pamela Isom: I think that that's fascinating and I agree with you. We do need to think about that infrastructure and the whole life cycle. And now we need to think about the life cycle not only from the standpoint of the traditional AI lifecycle and the generative AI lifecycle, but now we need to be thinking about the agentic AI life cycle.

[06:28] And so what you pointed out in your opening remarks is good. I think it's fascinating and I think I'm happy that we have the opportunity to talk today because I do think that you bring tremendous value and tremendous insights.

[06:45] So we touched upon a couple of things there that I want to go back to. But first let's talk about AI and the cloud, because I know there was earlier discussion there,

[06:58] and I've had these discussions too with folks, around what we should be thinking about as we start to use the cloud for AI purposes, right? For these models and the like.

[07:10] So let's start out with talking about that because there are so many products out there, which you said a moment ago.

[07:17] What's on your mind when it comes to choosing the products and the system or model cards? Tell me more about what we should be thinking about. What's on your mind there?

[07:29] Evan Benjamin: Well,

[07:30] as soon as a product comes out and when you mention products,

[07:34] the minute we hear that ChatGPT releases a new model or Anthropic releases a new model,

[07:41] we get excited and we go out and we want to just test it.

[07:45] All we want to do is,

[07:47] I picture just a lot of people opening up their browsers and saying, let's just test this.

[07:53] Let's just test with a prompt. What they do is compare prompts between products: they take one prompt in ChatGPT and compare it with the same prompt in Anthropic or Mistral or anything.

[08:07] They try to see who's faster,

[08:09] who's better.

[08:11] And Pamela, the problem is we're treating this too much like a product upgrade.

[08:17] And we don't think about the impact we're having because we forget that where did everyone get their data?

[08:26] So all the data that OpenAI and Anthropic uses,

[08:32] where did that come from?

[08:33] So as we prompt,

[08:35] we're not thinking about how we prompt, we're just prompting to see if it's faster. But we're adding to their existing repository of data.

[08:44] So I would like your listeners to know we're neglecting privacy and security.

[08:50] We're rushing to.

[08:53] We're beta testers. We're beta testers for OpenAI and Anthropic.

[08:58] But we're completely.

[09:00] We're neglecting our own privacy and security.

[09:02] We're trusting in the privacy and security of these vendors.

[09:08] Okay, so we're using the cloud.

[09:11] We're used to using AWS and Azure for everything else,

[09:16] but we gotta study how they're using AI.

[09:20] So, for example, AWS has Bedrock and AWS has SageMaker.

[09:27] We learn how to use Bedrock and SageMaker.

[09:30] But do we really do a deep dive? Do we read the model cards of all the products in those tools?

[09:38] Right.

[09:39] And people are afraid to read model cards. I first learned about model cards when I was doing AI governance and I got my CAIO and AIGP and all these certifications; they teach you to read the model cards.

[09:52] So I would go to OpenAI's site. I would read a model card for GPT-4o; I would read a model card for Anthropic's.

[10:00] Some of that is hard to read,

[10:02] some of that is hard to read.

[10:04] And we're looking for the danger signs. But I'm also looking for performance.

[10:10] It tells you. If Anthropic says Opus 4 can now run for four hours straight and complete a task where before it couldn't, well, that's a performance metric.

[10:24] I need to know that. Right.

[10:25] It's hard to read a model card or system card from start to finish. You lose interest because all you want to do is open up the product and start typing.

[10:37] So I think we're missing something.

[10:41] We're used to using the cloud, but we're not used to using AI in the cloud. That's a problem.

[10:48] So how do we force.

[10:51] Not force.

[10:52] How do we tell our users that they have to spend a day or two reading about these requirements before they go jumping in and testing?

[11:00] How do we tell our users that?

[11:02] And how do we.

[11:04] Pamela, you know that we have acceptable use policies,

[11:07] right?

[11:09] So how do we put that in our acceptable use policy?

[11:13] Can we mandate that people do a deep dive on that? I don't know.

[11:19] I would love someone's opinion on that because there's people like me who love reading model cards, but I can't force a person next to me to read it and I can't force someone on LinkedIn to read it, but they're waiting for my interpretation.

[11:35] So that means, yeah, that's why I'm up at 3am every night reading model cards, because I know that people want to be able to understand it, so I take that on.

[11:45] I think we need more interpretability of AI in the cloud. And just because you trust...

[11:54] I trust AWS implicitly,

[11:57] but I'd like to learn more about how they use their models, like their foundation models in the cloud and how secure that is. So I just need to know that for myself.

[12:08] So I hope that answers it. There you go.
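
One way teams operationalize the "read the model card before you test" advice above is a lightweight intake checklist. Here is a minimal sketch in Python; the section names follow the generic model-card convention (intended use, limitations, and so on) rather than any particular vendor's format, and the sample card text is invented for illustration:

```python
# Minimal sketch: flag which standard model-card sections a card covers.
# Section names follow the generic model-card convention, not a vendor format.
EXPECTED_SECTIONS = [
    "intended use",
    "limitations",
    "evaluation",
    "training data",
    "ethical considerations",
]

def missing_sections(card_text: str) -> list[str]:
    """Return the expected sections that never appear in the card text."""
    text = card_text.lower()
    return [s for s in EXPECTED_SECTIONS if s not in text]

sample_card = """
Intended use: summarization of legal documents.
Limitations: outputs require attorney review.
Evaluation: ROUGE-L on an internal benchmark.
"""
print(missing_sections(sample_card))  # the gaps an intake review would ask about
```

A check like this doesn't replace actually reading the card, but it gives a reviewer a concrete list of questions to bring to the vendor before anyone starts prompting.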

[12:11] Pamela Isom: Do you think they will put that type of insight in a model card?

[12:15] Evan Benjamin: I honestly don't know. And I'll be honest with you, because there are two types of model cards.

[12:21] There's external and there's internal.

[12:25] And if someone, if you grab a model card from a cloud vendor or from some LLM provider,

[12:35] how do you know what you're reading? Because there are separate internal model cards, and then there are external model cards that are made public.

[12:44] So which one are we reading?

[12:48] And will they put all that detail in a model card?

[12:51] I don't know. The model card is a living document that's supposed to be updated all the time.

[12:58] And how do we know that it's being updated all the time?

[13:02] Anthropic. I love Anthropic.

[13:05] They just came out with a system card for Claude Opus 4 and Claude Sonnet 4 this week. It was updated in May.

[13:17] But you can't read that without also reading their Responsible Scaling Policy, because all of it is tied to Anthropic's Responsible Scaling Policy, which is a separate document. So, Pamela,

[13:32] how are we going to get people to read all this documentation? And that takes away from their experimentation.

[13:38] Pamela Isom: Exactly. Yeah. So I agree with you. When I hear what you're saying, I think about governance, and from a governance perspective,

[13:52] we do need the model cards. And if we want a more technical version, we could have a system card, right? But we need a model card that is explainable and simple, because I shouldn't have to be a technician to understand the model that I am about to leverage that's available in the cloud.

[14:17] And this is something that, from an industry perspective, we need to push, because what we say is that all this information out there is explainable. Well, it's not.

[14:31] It's not, right? So it seems we need to get more clarification around what we mean by explainability and interpretability, and who's to decide that it's interpretable? Right. So it seems like there's some work that needs to be done.

[14:47] I understand what you're saying and what you are experiencing.

[14:51] It seems that there is greater opportunity to look at governance and that governance start to influence and shape what communications we convey when it comes to these models. And I like what you said about product.

[15:07] So you used the example earlier about how you can't look at AI as a product from the traditional sense because the makeup is different, because how Anthropic utilizes its data to form the models may be different than how ChatGPT for instance, utilizes its data to make up the model.

[15:30] So the same prompt is not going to work the same way across the models. And that's why today, when I work with the models,

[15:38] I have a preference, because I like the feedback from one versus another.

[15:43] I don't like how one generates responses, right? They're trained differently,

[15:47] but not everybody is aware of that. So that's another form of governance that we need to get out there, because people don't realize that this is why you see those differences: the models are trained differently.

[16:00] You can't look at them like traditional products.

[16:04] Right. You can't do it. So that is the test and evaluation strategy that needs to evolve. That is the governance, and that is the test and evaluation that you brought out, and it needs to keep evolving.

[16:17] It needs to keep up with the times.

[16:19] Evan Benjamin: Yes.

[16:20] Pamela Isom: Yeah. So I appreciate your insights. Sometimes I play back what you said in the way that I heard it just to make sure that.

[16:29] Just to make sure that I'm listening and paying attention. No, I mean, I really get into these. So from a cloud perspective, we kind of covered that, and how we should look at the products and the model cards, which I think is very informative.

[16:45] Now I want to go into AI literacy, but before I do that,

[16:50] my question for you and that we can talk about is threat model. Threat modeling. So what are the issues that you are seeing with existing threat models,

[17:02] if there are any?

[17:03] Evan Benjamin: Well,

[17:04] I'm glad you asked this, because I posted recently about some threat modeling frameworks, and when I wrote about them, one was called STRIDE and one was called PASTA.

[17:17] Now they're funny sounding names, but do you know, Pamela, these are frameworks that existed for a while and people aren't aware of them.

[17:28] Cybersecurity people would know it. Right.

[17:31] So these are standard threat modeling frameworks that I would use maybe in

[17:37] certain cyber environments. But that doesn't mean they're going to work perfectly for AI. And the problem is that we're trying to take existing cyber frameworks and lift and shift them to AI seamlessly, and you can't do that.

[17:56] You can't do that.

[17:57] So even if you're a red teamer who has experience red teaming years ago in cyber,

[18:06] and I take you and I ask you to red team my AI product like Anthropic or something,

[18:13] you may not be able to do it because you're going to use old threat modeling frameworks and there's new attack surfaces.

[18:19] So every time we add agentic AI and AI agents, we have new attack surfaces we don't even know how to mitigate.

[18:27] That's the problem. We have new threat vectors that we don't know how to mitigate, because there's no one-to-one mapping between an existing threat modeling framework and AI. So if you take STRIDE,

[18:41] STRIDE is the name of a framework from Microsoft.

[18:46] Microsoft has a threat modeling framework that's great. I can try to use it to red team a new LLM or agent,

[18:58] but it's going to leave certain things out, and then I get a false sense of security, because STRIDE says you're good and it's not. So there was a new framework that I posted about called MAESTRO,

[19:11] M-A-E-S-T-R-O, and it stands for Multi-Agent Environment, Security, Threat, Risk, and Outcome.

[19:18] You know,

[19:20] it's a seven-letter acronym that is specifically meant for multi-agent environments. So my question to you, Pamela,

[19:27] shouldn't everyone know about Maestro and stop using these old frameworks?

[19:33] The threat vectors we have, the attack surfaces, are big.

[19:37] And number two,

[19:38] I recently told someone, I mean,

[19:41] I was in an agentic AI class and I mentioned OWASP. Now everyone knows about OWASP, right? The OWASP Top 10.

[19:49] They give the top 10 web vulnerabilities, but they also do the top 10 LLM threats.

[19:58] Very few people know about that, so I've been posting about that because I want more people to know about this. If you're, if you're in cyber, you should be testing your product,

[20:09] your API, your AI using the OWASP Top 10 for LLM.

[20:15] You cannot take the OWASP Top 10 web model and apply that to your AI.

[20:23] You have to do a specific OWASP LLM model and apply that. But again, Pamela, the problem is cyber people think they can lift and shift immediately to AI threat modeling.

[20:37] And there's a difference. There's a huge gap.

[20:40] How do we educate people, how do we educate CISOs that there's a gap? How do we educate CTOs and the board of directors that there's a gap?

[20:52] That's why you see me, I'm just like,

[20:55] that's why I stay up at night. Okay?
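
The OWASP Top 10 for LLM Applications that Evan keeps pointing people to is concrete enough to hang simple review tooling on. A toy sketch in Python; the category names below follow the originally published list, and the current revision on the OWASP site may differ:

```python
# Toy sketch: track which OWASP Top 10 for LLM Applications categories a
# red-team review has exercised. Names follow the originally published list;
# check owasp.org for the current revision.
LLM_TOP_10 = {
    "LLM01": "Prompt Injection",
    "LLM02": "Insecure Output Handling",
    "LLM03": "Training Data Poisoning",
    "LLM04": "Model Denial of Service",
    "LLM05": "Supply Chain Vulnerabilities",
    "LLM06": "Sensitive Information Disclosure",
    "LLM07": "Insecure Plugin Design",
    "LLM08": "Excessive Agency",
    "LLM09": "Overreliance",
    "LLM10": "Model Theft",
}

def uncovered(reviewed: set[str]) -> list[str]:
    """Names of Top 10 categories a review has not touched."""
    return [name for code, name in LLM_TOP_10.items() if code not in reviewed]

# A web-era checklist ported straight across might only exercise a couple:
print(uncovered({"LLM01", "LLM06"}))
```

The gap Evan describes is visible even at this toy scale: a lift-and-shift web review leaves AI-specific categories like Excessive Agency untested.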

[20:58] Pamela Isom: That's what's keeping you up at night. Okay? So to answer your question,

[21:02] I do think that there is a gap. And as you know, I am real big on the infusion of AI and cybersecurity and privacy,

[21:13] but I'm really big on that infusion. And I have done a lot of work around advancing red teaming, because cybersecurity in the AI era is evolving. Everything must keep up.

[21:29] But cybersecurity has to keep up.

[21:31] And so the attack surfaces: the LLMs, the agents,

[21:35] the AI APIs, all introduce new vectors. So I do agree with you.

[21:41] I do think that there needs to be more applied education,

[21:47] applied literacy activities, so that we truly understand, when an agent is poisoned, how that happens and how we overcome it. And we also need to maintain the existing mindset of: yes, it's going to happen.

[22:05] So with cybersecurity,

[22:06] in our discussions, especially when I was in government, it was always,

[22:10] it's not a matter of if, it's a matter of when and are you prepared?

[22:14] I think that that same persistence, that same mentality must exist with AI. And the thing is that with agents, with the orchestration,

[22:25] with the multi agent platforms that we're talking about,

[22:29] it is going to be difficult to,

[22:32] to catch all of the threats, it's going to be difficult to detect them,

[22:38] but it opens the door for advanced blue and red teaming. So do more offensive types of testing; work together to understand what some of those threats are, and the emerging threats.

[22:53] So I am really big on advancing our literacy and advancing our capabilities in that space. And I do think the foundations of cybersecurity still hold.

[23:10] I like the CIA triad, right? Confidentiality, integrity, availability. Yeah, I love the triad, but the triad needs to be updated, because those are the main areas that get violated.

[23:25] But how it gets violated is what we need to zero in on in this day and time of LLMs and agentic AI threats.

[23:36] And the last thing I'll say to answer your question is OWASP is keeping up. So OWASP has the top 10 for LLMs and they have, they also have one coming for agents,

[23:48] I think. I'm pretty sure I saw one coming for agents.

[23:51] And on one of my podcast episodes (you know, I'm a reviewer of some of this content, a book reviewer), I had a discussion with one person who had written a book on mitigating cybersecurity threats, and it was around LLMs and some of the advanced AI concepts.

[24:15] And so I do that on purpose and I go in and I red team the books.

[24:22] I go in and say like yeah, no, no, no, this, this will work, but this needs updates. Right? So have we thought about this? Have we thought about that? And I really love it.

[24:32] And that's one of the shows, one of the episodes that you can listen to if you want a little bit more about that. But I do think OWASP is coming out with one for agents, if it isn't out already.

[24:44] Evan Benjamin: That's amazing.

[24:46] Pamela Isom: So did I answer your question?

[24:48] Evan Benjamin: Yes, you did. You know, Pamela, it surprises a lot of people that literacy is part of the EU AI Act, but the depth of knowledge that they want you to have is much deeper than most people think.

[25:06] And I want people to stop saying "AI first" and conflating that with AI literacy. Because just because all my employees have embraced AI doesn't mean they're AI literate.

[25:23] According to the EU AI Act, you have to know the life cycle, and you have to have knowledge of not just the data but the model and the outputs, the inference.

[25:36] You have to know what could go wrong at every stage. They want an in-depth knowledge of AI.

[25:45] I worry that these colleges are training their students

[25:50] in prompt engineering and saying, we're literate. And it goes beyond that, because the average prompt engineer doesn't know about inference,

[25:59] or about all the security risks in pre-training and fine-tuning and deployment.

[26:06] That's literacy.

[26:08] My goal is to educate people. And you know what, Pamela? I want to design my own stamp that says AI Literate, kind of like that big stamp on a website that says "trust verified" and stuff like that.

[26:22] I want a big stamp that says you are literate.

[26:26] You are AI literate. And that's my goal. That's my goal.

[26:30] Pamela Isom: So it sounds like what you're after is what really constitutes AI literacy.

[26:38] And that literacy is more than skimming the surfaces and saying that because I know how to write prompts, I'm literate.

[26:47] I would think that there ought to be a framework that we could look at preparing, one that talks about the different types of literacy, because AI is in everything.

[26:59] Right? So how we develop and evolve the literacy and then when it comes to literacy, you started to talk about the EU AI Act. Okay, so I want to go back there.

[27:13] So tell me more about the EU AI act, the areas that you find most interesting and what we should be focusing on if you can.

[27:26] Evan Benjamin: Well,

[27:26] there's Article 4, and I think there's Recital 29. So I don't know anyone who's actually read the whole EU AI Act. I actually bought a book that's an annotated version of the EU AI Act.

[27:40] And I

[27:41] just started reading it from start to finish, and I couldn't do it. I couldn't do it.

[27:46] I mean to read all those articles and all those recitals,

[27:51] there's just no way you can remember. And the annexes, there's just,

[27:55] there's just no way.

[27:57] So it's just a good reference. But everyone just focuses on

[28:02] the high-risk AI article and Annex III, the high-risk annex;

[28:10] they just focus on that because they just want to know: is their system prohibited?

[28:15] Are they high-risk? Can they escape the scrutiny of high-risk? What can I do so that I'm limited-risk or minimal-risk, so I escape scrutiny under the EU AI Act?

[28:28] No one cares about what Article 4 says.

[28:32] About literacy.

[28:33] And there's a great attorney online whose class I took, Luiza Jarovsky, and

[28:40] she's the biggest proponent of the EU AI Act's Article 4.

[28:45] When you read her posts,

[28:47] you can almost see her standing in front of you, saying, oh, I can't believe you didn't do this. I can't believe you're not following this. She writes in a way that says: you need to follow Article 4.

[28:59] She's one of the few people, Pamela, who will make you feel guilty if you don't follow every letter of Article 4.

[29:08] I'm going to say this, Pamela, I'm not political, but why do we have such a literacy problem here when the EU expects a depth of knowledge in AI that we don't expect here?

[29:24] That's not meant to be political.

[29:27] But there's a big difference between the EU and the US on innovation.

[29:34] Because the US is all about innovation and the EU is all about regulation.

[29:41] So now we need to find a compromise between those, right? How do you balance regulation with innovation?

[29:47] And I think in the US,

[29:49] a lot of people say there's no ROI.

[29:53] They keep looking at ROI and asking, what's the value in AI?

[29:59] They don't say that over there. They don't say that in the EU. They believe in the value of it; they're not focused on ROI.

[30:07] That's why there are much higher expectations for literacy in the EU. We have to stop being so metric-driven. And we don't even have metrics.

[30:18] Everyone says, well,

[30:20] a lot of CEOs, or a lot of the C-suite, say,

[30:24] I don't see much ROI for AI. That's why the literacy is 25%. Because if we take it to 80%,

[30:33] what's our ROI?

[30:35] And I think people are just mixing things up.

[30:38] Everything is just balance-sheet-driven, and for AI, you can't do that.

[30:43] You have to be AI-native, and you have to say the metrics will come once you adopt AI. We're trying to create metrics that don't even exist yet.

[30:55] So I think that's the problem. And I think that's why literacy is always going to be lower in the US compared to the EU.

[31:02] And we should welcome that. If we do federal legislation here in this country,

[31:08] we need something like the EU AI Act, not letter for letter, not word for word.

[31:15] But we need to adopt their Article 4 provisions for literacy. And I'm going to say one thing, Pamela. I saw an article that described 17 categories that you have to pass before you're AI literate.

[31:33] And it was a new framework that they said 17 categories and 15 components.

[31:40] And I said,

[31:41] not even the EU AI Act shows that.

[31:45] And this was published.

[31:47] And can you imagine someone at a big company saying, oh, look,

[31:51] look what I found.

[31:53] And they're still not in alignment with the EU AI Act. This is why I wish there was a law that says you cannot publish anything about AI literacy until it aligns at least 50% with the EU AI Act.

[32:08] That's my feeling.

[32:10] Pamela Isom: Yeah, that's going to be kind of hard considering.

[32:14] I mean,

[32:15] it's possible.

[32:17] But when we're in a.

[32:21] So here's what we're dealing with. We're dealing with balancing innovation and managing risks.

[32:31] So that balance that sits in the middle is what businesses are dealing with now and people at large.

[32:40] I have experience, and you know, you've heard about it: people can become so dependent on AI. And I personally think that we confuse things;

[32:54] like, we should stop saying AI is a companion.

[33:00] So I don't think so. I don't think AI is a companion.

[33:05] My husband is a companion. Right. My colleagues can be,

[33:11] you know, someone like a companion.

[33:14] A study guide is a companion guide, but it's not a companion.

[33:19] AI has no feelings.

[33:23] A study guide is a companion from the study perspective,

[33:27] but AI has no emotions. It has no feelings.

[33:31] It's not reliable.

[33:32] It's not.

[33:33] It's not reliable except for certain things.

[33:37] So this is why you have people that have taken it to an extreme. And then when the model tells you to go and commit harm, you do it, because you have bought into the illusion that it's truly your best friend.

[33:54] It's not. It's not a friend.

[33:57] So I personally believe that that's part of the literacy we need,

[34:01] because that leads to mental health issues and all kinds of crises. Now, is AI helpful? Yes. So I would use it in a heartbeat to summarize something for me, and then of course go back and edit it to make sure that it is giving me what I want.

[34:18] I create my own agents. Right. So I would create an agent to do certain tasks,

[34:24] but I would still caution my teens and I do that.

[34:31] Pamela Isom: There's only certain things that we're going to use agents for.

[34:34] Right. So I feel like

[34:38] we have to be really careful. Now, the EU AI Act. So back to what I was starting to say. So we've got this balance that we've got to strike.

[34:48] Some go to extremes.

[34:51] And when we go to extremes, I think is when we start to think AI is your companion,

[34:56] right? Because companions give you good advice. They're there to tell you like it is.

[35:00] AI wants to keep you asking it questions.

[35:05] That's what it does, right? So certain types of

[35:08] agents want you to keep asking questions and give you something so you can ask them some more, so they can get some more data, process that data, and build and grow more data.

[35:16] Like that's how they built it.

[35:18] It's not a companion.

[35:20] So part of that balancing.

[35:23] So sometimes people feel like you're negative because you are trying to point out the risks.

[35:30] Whereas I think with the EU AI Act,

[35:32] it's already understood that AI is and can be helpful. It's already understood that it's an innovation.

[35:45] They are saying, don't lose sight of the risk. It's like what I always tell people,

[35:50] you know,

[35:51] right. Your home is a great thing and you love your neighborhood and you love what you have and you want to tell everybody all about what you got. Well, really, no.

[36:03] Lock the doors and keep some information to yourself.

[36:06] Right?

[36:07] So with AI, it's like the same thing. It's like you can't lean on it for everything.

[36:15] And I feel like with the EU AI Act, they're telling us to be careful: understand that there's innovation, but also understand that there are these risks that we have to be mindful of. Like lock your door, right? So don't tell everybody your business and things like that.

[36:32] So in the United States, where I've been a citizen all my life

[36:37] and in this field that I'm in, I've seen regulations come and go.

[36:43] And right now we're dealing with part of the government that is saying we don't need a lot of regulation.

[36:51] So oftentimes you hear me talk about lightweight governance and not adding extra burden. I've had clients that have said we want to work with government. This happened when I was in government, and even now as a private sector leader,

[37:07] clients have said, I want to work with government,

[37:10] but I don't want to tell them everything that's going on because they're going to put more burdens on us. Right? So if we open up and tell them about this risk that happened, this vulnerability that happened, we kind of don't want to, because we don't know what we will experience as a result of that.

[37:28] Will there be more regulations? Right. More stipulations? So businesses are trying to strike the right balance. But you can go to an extreme in both directions.

[37:40] You can go to the extreme and just totally deregulate. No regulations on AI.

[37:44] That's just not going to work.

[37:46] Self-regulation? What will happen is that companies will self-regulate, put regulations in place. Some will, some won't. But then there's no coherence.

[37:56] Right? So we've got to have,

[37:58] some type of regulation. I call it lightweight governance, but it needs to be effective, right? And there need to be some standards and some regulations.

[38:08] And I think that we understand that,

[38:11] but I do think that we're trying to strike that healthy balance. And some would say the EU AI Act is too much,

[38:19] right?

[38:21] Some would say,

[38:22] well that's too much.

[38:24] But I like the layout of the different categories.

[38:29] prohibited AI, high risk,

[38:32] medium risk, and pretty much low risk.

[38:35] I like the categories that they have, personally. So I refer to it, and I suggest to my colleagues to refer to it, while we in the US start to come up with something from a federal perspective that's a little more supportive.

[38:50] Evan Benjamin: Right? And you know, we don't know if it's going to change. With the EU AI Act, only certain parts

[38:57] become effective.

[38:59] So in 2025 only two or three things become effective.

[39:03] Does anyone know if parts may change next year? We don't know. Like, we can't assume that we know what's going to happen with the EU AI Act, right?

[39:15] So I do think it's a problem when big tech from here tries to influence the EU to change their regulation.

[39:28] I don't think the US should be telling the EU what to do.

[39:33] I think you've got to let it happen naturally and then try to align yourself with that. Just please watch out for these rogue AI literacy frameworks that are ruining the spirit of AI literacy.

[39:51] According to Article 4. That's how I frame it.

[39:54] Pamela Isom: I think that's good insight.

[39:57] One of the things that's been on my mind is how do we teach our K-12 students?

[40:02] Because they need literacy too, and there is a concern that we don't want our youth to lose their critical thinking skills.

[40:15] So the example that I gave earlier,

[40:18] I was actually thinking about our youth and making sure that those critical thinking skills still kick in, that we don't lose sight, and that we help our young people, starting at K and pre-K, to understand that you still need to apply your critical thinking skills.

[40:42] Because if we just go by the models,

[40:46] we often don't know how those models are trained. We don't know what data the models are using.

[40:53] So there's governance that we need for sure.

[40:56] But why not teach our young people,

[40:59] don't call it governance, call it critical thinking skills, but teach them early.

[41:04] That you have to not just rely on tools like that,

[41:10] but you have to apply,

[41:12] you know, some common sense and some human judgment,

[41:16] Right?

[41:17] Evan Benjamin: And Pamela, I'm going to say one thing. I think certain countries take that too far. We read that China wants to train

[41:28] students as young as six years old about AI.

[41:32] But I think for some countries,

[41:34] there are two questions: what's too young and what's too much?

[41:40] So in this country,

[41:42] six years old is too young in my opinion.

[41:47] But those same six-year-olds have to learn hardware,

[41:53] they have to learn the life cycle of AI.

[41:58] That's too much.

[41:59] So, two things I want you to think about later.

[42:03] What's too young,

[42:05] what's too much?

[42:07] So would you agree that for K to 12,

[42:11] I mean, 13 on up is a threshold? Because I don't see six-year-olds learning the AI life cycle. And when you say critical thinking,

[42:25] can they learn that from prompting, or do they have to learn adversarial prompting?

[42:32] Because critical thinking is also adversarial prompting.

[42:35] So what do you define as too much?

[42:40] Pamela Isom: So, for instance, I probably wouldn't want my four or five year old relative to learn the AI life cycle because it won't make sense to them,

[42:57] right? What we have to do is meet them where they are.

[43:00] So all of our

[43:02] itty bitties are using tablets.

[43:05] They may not be on the Internet. Like my daughter, she blocks the Internet,

[43:10] but she allows them to use tablets so that they get familiar with how to use tools, and they can use her phone better than she can.

[43:18] You know what I mean? So our kids pick up things quick.

[43:22] So let's not fool ourselves. They pick it up, right? And so now they like animation.

[43:29] So AI is in the animation,

[43:32] right? So where is the AI in these tools that our kids are using?

[43:39] You've seen it. Kids love looking at the images that come across the phones. My daughter likes to turn on the images. She says, can we play silly faces, right?

[43:50] And then they turn my face into pretty animals,

[43:57] things like that, right?

[43:59] And so that's the kind of AI where they can start to learn that, yes,

[44:05] there's a technology behind it. This is a technology that's converting this and allowing it to overlay. This is what's happening with pictures, right? You can take elements of a picture, grab it, and put it elsewhere. So you can help them to understand.

[44:22] And help them to understand: this is what you would do, this is what you wouldn't do. Certain things you don't do.

[44:31] Right? So you can start to teach them early and let them know that it's not the tool that's responsible, it's you that's responsible.

[44:42] My granddaughter has one where it shows her how to water the plants, right? How to water the garden and grow things. It literally shows her how to

[44:53] put stuff in water and watch it grow and different things.

[44:58] So, the critical thinking. What's too young?

[45:01] I don't think six years old is too young. I just think we need to meet them where they are, because it needs to resonate. The critical thinking, in my mind, is about not letting the tool influence them to do something.

[45:15] Like, if you see that, you're like, don't give a plant hot water.

[45:22] Don't lose sight of those kinds of things, right? If it's 100 degrees outside,

[45:28] right? What do you do about your plant? So I'm talking about those kinds of skills that they need to learn at an early age.

[45:37] And if a tool says to do something contrary to that, they know how to say, yeah, no, let's not do that.

[45:44] Evan Benjamin: Okay.

[45:45] Pamela Isom: Meet them where they are.

[45:47] Yeah. And I bet we could come up with a life cycle discussion at that age.

[45:53] But you have to meet them where they are.

[45:56] So that's what I'm saying. And I don't think that our kids are too young, because AI is integrated in everything. We have to understand how

[46:06] to get it across. Now, am I going to teach them a cybersecurity class?

[46:10] Probably not. But am I going to teach them about protecting information and not sharing information that is about their family with just anybody? You're going to teach them some stuff, right?

[46:27] Am I going to teach them that

[46:29] if someone calls and says that it's your mom,

[46:32] and it may sound like your mom, they have steps in place to help them say, no, that's not my mom?

[46:40] You say you're my mom, but you're not my mom,

[46:43] right? I've heard about you. You're a deepfake,

[46:47] right?

[46:48] We have to teach our kids. You can't say that's not relevant.

[46:52] It's relevant. So that's what I mean. So we have to think through what makes sense to meet them where they are, but they have to understand these things,

[47:00] right?

[47:01] Evan Benjamin: And Pamela, I've got to ask you, can you teach them about bias?

[47:09] Deepfakes are one thing, but can they look at the output and say,

[47:14] how come the output excludes this group?

[47:17] Can we teach them bias at a young age?

[47:21] Pamela Isom: So let me just say this.

[47:26] I use dolls a lot to help me understand what kids are thinking, right? And there have been studies, right?

[47:34] So if a child gets three dolls

[47:40] of different cultures: one is white, one is black, one is something else, right?

[47:46] How does the child treat those three dolls? If you say,

[47:52] there's going to be a party,

[47:58] and we want to invite the dolls to come to the party, and you give them names,

[48:04] what does the child do? Do they allow all three to go to the party? Do they say, no, you have to stay back?

[48:11] Right? And that gives you insights into what's going on with them,

[48:16] right? So that's what I do with my grandkids, right? I have my way.

[48:21] And then I tell my daughter,

[48:23] you might want to keep an eye on this specific area. So now take that to models.

[48:30] So with models,

[48:32] it depends on what the model is doing,

[48:35] right? So it could be a doll, it could be a model. So,

[48:40] what is the behavior? With the plant, for instance,

[48:45] is there a difference in treating the plant if the plant owner is darker skinned?

[48:52] Are they making a difference? Right? Do they even notice that there's a difference? And are they doing anything different because of that?

[49:00] Those are the kinds of things that you watch out for. And I use a very simple example. Someone might say, that's not true. It is true.

[49:09] It is true, right? So you have to meet our itty bitties where they are and listen. They tell you what's going on.

[49:18] They tell you because they're innocent.

[49:21] So they're going to tell you what they're learning.

[49:24] That's how I do it with my grandkids, right? I have two. So,

[49:30] yes, you can teach our kids bias and how to watch out for bias. More importantly, you want to know how they address certain situations, because they've got to handle things appropriately.

[49:46] Evan Benjamin: Very interesting. I love that response.

[49:49] Pamela Isom: So,

[49:50] but we can talk some more about that. And I'm sure you have your own ways, with all different ages, of how to recognize and identify and spot some things, and what to do about it, and whether you look the other way.

[50:06] I hardly ever look the other way. I chew on it and try to figure out what to do. But I really care about it in our K through 12s because they are our future.

[50:16] Evan Benjamin: Yes.

[50:17] Pamela Isom: And you have to be thinking, from my perspective, the AI perspective: what are they learning?

[50:23] Evan Benjamin: Yeah. I would like to come up with a different standard for AI literacy in education than AI literacy in commercial settings, or whatever. I'd like to come up with two different standards so that you can't just say, well,

[50:36] we all have the same AI literacy. I think they should be broken down according to education versus commercial,

[50:44] things like that. So, AI lit.

[50:47] I don't know how you say this, but AI lit E or AI lit G, or just different.

[50:53] Different standards for different domains.

[50:55] Pamela Isom: Exactly. And that's what I was saying earlier. We probably could come up with a framework for literacy, digital literacy, AI literacy. We'll talk some more and see what's out there and see where we see some gaps and things.

[51:07] So it definitely needs to occur, and it needs to be a living framework.

[51:12] Yes,

[51:13] those times are changing. Okay, so we are at the last part of the call.

[51:19] First, I wanted to talk about MCP, but we're going to save that for another discussion. You mentioned it earlier, though, so we'll talk about it in a follow-up discussion.

[51:27] And then the last question that I typically ask the guest is whether there is a call to action, words of wisdom, or any other nuggets that you want to share with us as a takeaway.

[51:45] Evan Benjamin: Yes.

[51:47] My call to action is: admit how much you don't know about AI. I think a lot of people are being AI-shamed,

[52:00] especially in companies, when they see people getting AI certifications and they don't have the same certification. You don't need a certification to learn AI.

[52:10] The people who are getting AI certification,

[52:13] they really want to learn it. That's just showing their intent and commitment. You don't need a certification,

[52:20] but you do need to make some effort.

[52:23] What's the least you can do right now to learn AI without saying that AI is complicated? That's my call to action.

[52:35] After you hear this talk,

[52:37] what is the least thing you can do to make yourself 2% better than you were yesterday in terms of AI? And I don't care if that means

[52:50] you're going to learn how to be a better prompt engineer, or you're going to watch a video on agents, or you're going to take a class.

[52:58] It doesn't matter.

[53:00] I think there's a lot of people who are just waiting to see what happens and they're not actively doing something to learn AI.

[53:09] So me, Pamela, I love certifications.

[53:13] I have got too many certifications. That's my fault.

[53:18] I love studying auditing.

[53:21] I'm studying many different types of auditing because that's what I want. It doesn't mean that's what someone else wants.

[53:27] So what can you do right now? My company offers tons of AI training that a lot of people don't take because they say they don't have time. And I'm telling you, even on a nice day like today,

[53:43] guess what?

[53:44] I'm going to spend at least 30 minutes learning some AI.

[53:48] So what is the smallest thing you can do to learn AI?

[53:52] That's what I'm telling everyone. That's the call to action. And find someone on LinkedIn, not just me. You write a lot of good stuff, and there's so much great AI content on LinkedIn.

[54:05] Start reading,

[54:06] pick one post,

[54:08] read it and save it.

[54:10] That's your homework.

[54:12] Just one.

[54:13] So there's just a lot of good stuff out there. But I think people are saying it's too complicated or I don't have time, and they've got to stop saying that.

[54:23] That's my call to action. That's my takeaway. And the second takeaway: if you're still down here at the LLM level,

[54:31] make an effort to go to the next level. LLMs are great,

[54:38] but you gotta be at the agent level. You gotta be at the agentic level.

[54:43] It's gonna keep changing every day. But this is the year of the agent. Do whatever you can to learn anything about agents.

[54:52] Do not be stuck in the LLM world.

[54:56] That's my call to action. That's my takeaway.

[54:59] Pamela Isom: I think that that's really good. I appreciate that. I'm sure that the listeners will appreciate that. So two things.

[55:06] Get out of the LLM world and move on up the stack. Right? So it's not really a stack, but consider it a stack. Move up the stack to agents and agentic AI.

[55:17] And then the second thing you said is do what you can to learn something about AI. Even if it's only two minutes, take time out,

[55:29] make time to learn more about AI and build up the AI literacy, correct?

[55:36] Evan Benjamin: Yes, anything. Anything.

[55:38] So it could be a video, a book,

[55:41] anything. And then make it a habit. Every day.

[55:44] Every day. And watch what happens to you after 30 days. That's what I'm saying and we need that.

[55:51] Pamela Isom: So I appreciate that. This has been a wonderful discussion. Now, for the call to action for the listeners:

[55:58] When you listen to this,

[56:01] be sure to respond on LinkedIn if you can and let us know what you are doing or plan to do after listening to this. I would appreciate that, because we take the information and then we do other things to let you know that we are hearing what you have to say.

[56:21] And then the last thing, Evan: you and I are going to have some follow-up work.

[56:27] So this is the beginning of a podcast series.

[56:30] This is part one, and we're planning another one because there was a lot that we still didn't get to cover on this call.

[56:37] So I'm looking forward to a follow up podcast, a follow up episode.

[56:43] So thank you very much.

[56:45] I do appreciate you being here.