
AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
E040 – AI or Not – Pupak Mohebali and Pamela Isom
Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.
The complex landscape of AI governance demands more than theoretical frameworks; it requires practical bridges between policy and implementation. Dr. Pupak Mohebali, AI policy consultant and researcher with a background in international security, brings a refreshingly grounded perspective to this challenge.
Dr. Mohebali reveals how her multidisciplinary experience shapes her approach to making AI governance accessible. "Most organizations aren't lacking frameworks," she explains. "They're lacking translation between policy and practice." Her AI governance starter kit transforms abstract principles into straightforward questions: What AI tools are we using? Who's responsible if something goes wrong? What data feeds these systems? This practical approach helps teams engage with governance without feeling overwhelmed by complexity.
The conversation challenges the dangerous myth that "AI is just a tool." Every AI system reflects human decisions about data selection, goals, and beneficiaries. By pretending AI is neutral, we shift blame from designers and organizations to the technology itself, an abdication of responsibility that Dr. Mohebali firmly rejects. This perspective connects directly to the ongoing importance of AI literacy, not to make everyone a technical expert, but to empower people to ask meaningful questions about how AI affects their lives.
Perhaps most eye-opening is the discussion of AI's hidden environmental footprint. Training large models can generate emissions equivalent to those of five cars over their entire lifespan, while services like ChatGPT potentially consume 500,000 kilowatt-hours daily. These costs remain largely invisible, particularly when systems operate through remote cloud services. "We need more than incentives," Dr. Mohebali argues. "Environmental considerations must be mandatory in regulations from the outset."
The conversation concludes with a powerful insight: AI ethics isn't a fixed endpoint but a process that continuously questions who defines ethical standards and who benefits. Want to develop a more nuanced understanding of AI governance that balances innovation with responsibility? This episode offers practical wisdom for navigating the complex intersection of technology, policy, and human impact. Subscribe and join the conversation about creating AI systems that truly serve humanity.
[00:00] Pamela Isom: This podcast is for informational purposes only.
[00:27] Personal views and opinions expressed by our podcast guests are their own and are not legal advice,
[00:35] nor health, tax, or other professional advice, nor official statements by their organizations.
[00:42] Guest views may not be those of the host.
[00:51] Hello and welcome to AI or Not, the podcast where business leaders from around the globe get together with me and share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and your digital transformation journeys.
[01:10] I am Pamela Isom and I am your podcast host.
[01:14] We have a special guest with us today, Dr. Pupak Mohebali.
[01:20] She is an AI policy consultant and a researcher with a background in international security.
[01:27] Thank you for joining me today and welcome to AI or Not.
[01:31] Pupak Mohebali: Thank you for having me, Pamela. It's an honor to be here with you.
[01:34] Pamela Isom: It is. I'm so delighted you're able to join me. So I'd like to start out by asking you to just tell me more about yourself. We had an interesting conversation before we started this podcast episode.
[01:47] So tell me more about yourself and your career journey. I'm sure the listeners will be excited to hear about it.
[01:55] Pupak Mohebali: Thank you. Yes,
[01:57] I work at the intersection of AI, ethics and public policy,
[02:01] with a focus on governance, accountability and the social impact of emerging technologies.
[02:07] So my background is international security and political journalism.
[02:12] And I've been drawn to multidisciplinary work, from ethics to energy policy to digital rights. And now I focus on how AI systems shape decisions in real life, things like healthcare, migration, and public services.
[02:29] I care about making AI more transparent and accountable, and not just in written policies. I care about making it transparent in how it plays out in practice.
[02:42] So that's what led me to start building Inc. IQ Hub, which for me is a space for tools and strategies that make responsible AI more usable and realistic.
[02:54] And my journey, as you asked. I started out years ago as a PhD researcher focused on politics, ethics and governance.
[03:04] At the time I was working on nuclear non-proliferation and the ethical use of nuclear energy.
[03:11] So from early on I was thinking about long term risks, power and accountability.
[03:18] Then I moved into political journalism.
[03:22] Most of my work was focused on foreign policy,
[03:24] but I also covered stories on digital repression and surveillance, including how the Islamic Republic in Iran uses digital tools to silence dissent.
[03:37] And that experience really grounded my perspective.
[03:42] And because it was about real people and real consequences, it wasn't just some theoretical matter.
[03:49] Eventually I began noticing how AI was starting to show up in the same context, quietly shaping decisions with very little oversight.
[04:00] And that's what drew me into AI governance.
[04:04] Pamela Isom: All right, well, that's pretty fascinating. I certainly appreciate what you're doing,
[04:08] so thank you very much for that.
[04:10] Tell me more about policymaking. So you have some perspectives on making policy more practical. You touched on that when you were giving me your background. Tell me more about that.
[04:24] Pupak Mohebali: Yes. So at the moment I'm building some tools that help people actually use AI ethics principles. So one of the things that I created is an AI Governance Starter Kit,
[04:37] which is a practical guide that helps organizations to think clearly about their systems. So it's not just a checklist of abstract principles, it's a set of plain language questions.
[04:49] Those questions are, what tools are we using that rely on AI? Who's responsible if something goes wrong?
[04:57] What kind of data are we feeding these systems? And what might be missing about fairness, about safety, and about unintended outcomes?
[05:07] So these are really basic questions, but most teams haven't asked them.
[05:12] And I found that when you give people an entry point that doesn't feel overwhelming, they're far more likely to engage with it.
[05:21] I've had people come to me and say, oh, I didn't even realize this tool counted as AI, or, we've never documented what data it was trained on. That's the whole point: to make governance visible and doable.
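To make those starter-kit questions concrete, here is a minimal sketch of how a team might capture them as a lightweight inventory. It is an illustration only, assuming a hypothetical Python record; the field names and example entries are ours, not part of Dr. Mohebali's kit.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a plain-language AI inventory, mirroring the
    starter-kit questions above (illustrative only)."""
    tool_name: str            # What tool are we using that relies on AI?
    owner: str                # Who's responsible if something goes wrong?
    data_sources: list[str]   # What kind of data are we feeding this system?
    # What might be missing about fairness, safety, and unintended outcomes?
    known_gaps: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        tool_name="resume screening assistant",
        owner="HR operations lead",
        data_sources=["historical hiring decisions", "applicant CVs"],
        known_gaps=["training data never documented", "no fairness review yet"],
    ),
]

# Surface every system that still has open questions.
for record in inventory:
    if record.known_gaps:
        print(f"{record.tool_name}: follow up with {record.owner} on {', '.join(record.known_gaps)}")
```

The point is not the data structure itself but that each field forces one of the plain-language questions to be answered and documented.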
[05:37] A framework that I rely on in my work is the EU AI Act.
[05:43] It's been a really useful framework.
[05:47] Even though it's a European law, it's already having a lot of global effects. Companies outside the EU are adapting their systems to meet those standards and it's kind of becoming a reference point for what responsible AI should look like.
[06:02] And so I often use that to help people move beyond vague conversations about bias or harm and start mapping real risks and obligations.
[06:14] There are other frameworks out there as well, like NIST in the US or the OECD's principles. But what I found is that most organizations aren't lacking frameworks. They know the frameworks exist.
[06:27] They're lacking translation, as in, a bridge between policy and practice.
[06:32] They see the legal framework, but they don't understand it. So what I do is make it understandable and practical for them, because that's where most of the struggle is happening.
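As a rough illustration of the risk mapping she describes under the EU AI Act, here is a first-pass triage sketch. The tier descriptions are simplified and the keyword hints are assumptions for demonstration; a real classification requires reading the Act and its Annex III, not string matching.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified EU AI Act risk tiers (illustrative, not legal advice)."""
    PROHIBITED = "unacceptable risk: banned outright"
    HIGH = "high risk: strict obligations before deployment"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "minimal risk: no extra obligations"

# Hypothetical keyword hints; a real assessment needs legal review.
TIER_HINTS = {
    RiskTier.PROHIBITED: ["social scoring", "subliminal manipulation"],
    RiskTier.HIGH: ["hiring", "credit scoring", "border control", "medical triage"],
    RiskTier.LIMITED: ["chatbot", "deepfake"],
}

def rough_tier(use_case: str) -> RiskTier:
    """First-pass triage of a plain-language use-case description."""
    text = use_case.lower()
    for tier, hints in TIER_HINTS.items():  # checks most severe tiers first
        if any(hint in text for hint in hints):
            return tier
    return RiskTier.MINIMAL

print(rough_tier("AI-assisted hiring shortlist"))  # RiskTier.HIGH
```

Even a toy triage like this moves a conversation from "is it biased?" to "which obligations apply to us?", which is the shift she points to.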
[06:46] Pamela Isom: You're like that bridge, you sit right between policy and practice.
[06:50] Pupak Mohebali: Yes, a bridge between policy and practice.
[06:54] So, yes, I focus on making governance grounded.
[06:58] So if we want AI to be used responsibly, we need to meet people where they are.
[07:04] And that means being clear, being practical and realistic about what's possible,
[07:10] especially for smaller teams that don't have an ethics department to explain everything to them.
[07:17] And so I think it means that we need to make sure governance doesn't become a compliance box for them.
[07:25] It needs to be part of how decisions are being made day to day and not just in crisis moments.
[07:33] Pamela Isom: So first of all, thanks for those insights. I like everything, of course,
[07:38] but I do particularly zero in on the governance conversation and how governance is the overarching umbrella of policy and practice. But I like how you are really looking to make governance be that glue that pulls together the policy and the actual practicality of using the tools and making the policy more actionable.
[08:05] So there are frustrations.
[08:08] I talk to people a lot, right. And I have clients. And there are frustrations that policy is just something that sits on a shelf.
[08:18] And there are other frustrations that governance is a burden. Right? It just gets in the way.
[08:24] And so anytime we can,
[08:27] and I do this as well, anytime we can work to make it so that people really understand why this is necessary and why this is important, it really resonates.
[08:39] So when you say you're focused on making things more practical,
[08:43] making policy more practical means making it more understandable, as you said.
[08:48] But also, why does this matter to me?
[08:51] And that's where I think we,
[08:53] you know, we get these long, drawn-out policies and then you don't want to deal with them. It's like, great, we've thought of everything.
[09:00] And the policy is so complex,
[09:03] which is why it's good that we have tools that can help us parse them.
[09:07] But I do like that intermediary role that you described, and I just wanted to let you know that I heard you on that.
[09:16] Now I want to go to international security. Because part of your background, you mentioned is in international security.
[09:25] How's that working for you today?
[09:28] How's that helping you in your AI journey?
[09:31] Pupak Mohebali: It's been really interesting.
[09:33] Going back to what you just mentioned about that bridge: I worked in international security,
[09:39] both at the policy level and the academic level. And then later I became a journalist, a political journalist. And at the beginning,
[09:50] I was so used to all the political terms, the way you talk in academia or in policy circles, where everybody knows what you're talking about.
[10:02] So you don't have to simplify anything for anyone.
[10:05] Then in journalism it's the other way around. You have people from all different
[10:12] backgrounds.
[10:14] You have to make everything bite-sized and understandable. So it doesn't matter who your audience is, everybody needs to understand it. And that really helped me,
[10:25] being an academic and being a journalist, both of them. So I realized that, okay, if I'm explaining a policy to someone who works in that area,
[10:35] I don't need to explain every detail.
[10:37] But if I'm going to help other people to understand it, then I have to explain it in a way that is understandable for them.
[10:43] So this really helped me in what I'm doing right now:
[10:49] okay, this is the policy, I understand it, but not everybody out there knows how to use or understand it. I need to make it tangible for them.
[10:58] And I think AI is also reshaping international security on multiple fronts. So basically we have autonomous weapon systems, surveillance networks, border technologies,
[11:12] cyber warfare, and decision-making tools that are being used by defense departments. But the real shift is structural, I guess: AI changes who holds power, how threats are defined, and what gets prioritized.
[11:30] Pamela Isom: Okay, no, I think that's good. I see AI in international security, I see it, right?
[11:37] I think we need to be careful, but definitely. Exactly, yeah.
[11:44] Pupak Mohebali: On being careful, I guess one of my biggest concerns is how fast AI is being adopted in security spaces without the same level of oversight that we demand in other high-risk areas.
[11:58] There's often this assumption that national security justifies secrecy or fast-tracking innovation,
[12:06] but that's exactly where we need the most accountability.
[12:10] Especially when AI tools are being used in contexts like conflict zones, or against protesters, refugees, or marginalized communities, that really matters.
[12:23] Pamela Isom: Is it a myth that AI is just a tool?
[12:27] Pupak Mohebali: That's a great question.
[12:28] There is that myth where they say AI is just a tool, that it's neutral, that it's just math.
[12:37] I say it is not. It is not just any of those things,
[12:41] it goes beyond that.
[12:44] Every AI system reflects human decisions. For example, what data gets used and what goals are prioritized, who benefits and who doesn't.
[12:55] So the way some people talk about AI is as if it's just a black box with no controls. But I believe there are decisions being baked into every layer of the system.
[13:09] And if we pretend AI is neutral,
[13:12] it lets people off the hook.
[13:15] So it shifts the blame away from designers and companies or institutions and puts it onto the technology.
[13:23] I believe that we, the designers, the companies, the institutions who build AI, are responsible if something goes wrong. Technology doesn't create itself; we create it. So I think it's a lot more than just a tool.
[13:39] So we need to stop asking, what can AI do? And we have to start asking, what are we choosing to do with it, and why?
[13:50] Pamela Isom: Okay. I was wondering if I could get your perspective: do we still need AI literacy, or are we past that point? Because everybody's talking about AI literacy.
[14:02] Pupak Mohebali: Absolutely. We definitely need AI literacy. It's very important, it's vital.
[14:10] You don't need to be a data scientist to understand how AI affects your life,
[14:15] like from the way decisions are made about your job, your health, your access to different services.
[14:21] When people understand the basics,
[14:23] like where data comes from, who's accountable, or how bias shows up, they are more empowered to challenge the decisions and push for transparency or just make more informed choices.
[14:37] So it's not that we need to turn everyone into a technical expert in AI. What we need to do is make sure people aren't left out of the conversation.
[14:47] If they learn about it, they can be part of the conversation, because then they can ask the right questions.
[14:55] Pamela Isom: So I feel like business leaders
[14:58] should keep that in mind.
[15:02] And if you have provided training to the organization on AI literacy,
[15:09] keep in mind that it's never enough.
[15:13] And the concern that I have,
[15:15] the reason why I asked the question I just kind of wanted to get your perspective on it, is don't become complacent when it comes to AI literacy because the technology is evolving.
[15:27] The literacy needs to evolve as well. But there are also the basics: just because you've had several programs that you initiated
[15:39] doesn't mean that someone doesn't need those original programs. Right? And I think that's what I heard from you. I added my own take, but I believe that's what I heard.
[15:51] Is that correct?
[15:52] Pupak Mohebali: Absolutely. You're absolutely right. Yes. And you said it very well: because AI is progressing very fast every day, our literacy has to keep pace.
[16:06] We shouldn't stop with the literacy.
[16:09] It's not like we can learn some basics and then stop. Our literacy should grow as the technology grows. Yes, I absolutely believe in that.
[16:19] Pamela Isom: And then earlier you mentioned to me about the energy life cycle of AI and the demands. Can you talk some more about that?
[16:27] Pupak Mohebali: Absolutely.
[16:29] Recently I published an article in Tech Policy Press about AI's climate cost, basically the environmental impact of AI, and why policymakers need to wake up to AI's energy footprint.
[16:47] So what I say here is that AI is reshaping the way governments operate and deliver public services.
[16:56] However,
[16:58] as this technology becomes more deeply embedded in public infrastructure, a critical issue remains overlooked: the vast and largely unmeasured energy demands behind artificial intelligence systems.
[17:14] There is growing concern over the environmental costs of digital technologies.
[17:21] But we lack a comprehensive understanding of AI's full energy life cycle,
[17:27] from training to deployment to maintenance.
[17:31] So the existing data is very fragmented, vendor-controlled, and often excludes essential details such as energy sources,
[17:40] regional variations and long term carbon impacts.
[17:45] So basically, I believe governments across the world
[17:51] are adopting AI tools for a range of applications, for example,
[17:55] immigration control, healthcare triage,
[17:59] predictive analytics, and citizen services like facial recognition, all these different ways that they use it.
[18:09] This wave of transformation is often framed as efficient, cost-saving, and innovative.
[18:17] But we know that while AI promises faster decision-making and improved service delivery, it also introduces a hidden trade-off, and that's its carbon cost,
[18:30] which nobody thinks about at the moment.
[18:33] So I believe the environmental impact of digital technologies has been underestimated, especially when those systems are outsourced to third parties or run in remote cloud data centers. It's as if,
[18:49] because we can't see it, it doesn't exist. But it definitely exists.
[18:54] And when we train large-scale foundation models, e.g. GPT-3 or PaLM, they can emit as much carbon as five average cars over their entire lifespans.
[19:12] There is research on this done at the University of Massachusetts Amherst, and it says that running these models, known as inference, also consumes substantial energy, particularly at scale.
[19:30] There is another study, from 2023, from the University of Washington that found that serving a tool like ChatGPT could use as much as 500,000 kilowatt-hours per day, depending on user volume and infrastructure.
[19:47] So this is really big, and if we use this in the public sector,
[19:52] definitely the consequences are a lot bigger. And so I think, first, governments
[19:58] contribute to emissions without realizing it, because it's cloud-based: you don't see it, you don't know how much impact it has on the environment.
[20:09] So yes, that's what I think. We don't have the full picture, we need to really think about it, and that's dangerous in a policy context. So we really need to include the energy and environmental impact of AI in regulations from the beginning,
[20:30] rather than thinking about it later, when it has affected the environment and it's too late to do anything about it.
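To give a rough sense of scale for the figures she cites, here is a back-of-envelope sketch. The 500,000 kWh/day number is the estimate mentioned in the episode; the grid emission factor is our assumption, since, as she notes, real impacts depend on energy sources and regional variation.

```python
# Back-of-envelope carbon arithmetic for the figures cited above.
# Assumptions: DAILY_KWH is the University of Washington estimate mentioned
# in the episode; KG_CO2_PER_KWH is a rough global-average grid emission
# factor chosen for illustration, since real factors vary widely by region.

DAILY_KWH = 500_000      # estimated daily energy for a ChatGPT-scale service
KG_CO2_PER_KWH = 0.4     # assumed grid emission factor, kg CO2 per kWh

daily_tonnes = DAILY_KWH * KG_CO2_PER_KWH / 1000   # kg -> tonnes
annual_tonnes = daily_tonnes * 365

print(f"~{daily_tonnes:.0f} tonnes CO2 per day, ~{annual_tonnes:,.0f} tonnes per year")
# Under these assumptions: ~200 tonnes CO2/day, ~73,000 tonnes/year
```

The exact numbers matter less than the point that nothing in a cloud invoice surfaces them, which is why she argues for making this reporting mandatory rather than optional.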
[20:37] Pamela Isom: Should there be incentives?
[20:40] Pupak Mohebali: Yeah,
[20:41] that's good because,
[20:44] I mean, who doesn't like incentives? But I don't think it's enough, because with just an incentive,
[20:49] some might act because of the incentive, and some might not.
[20:54] But with the environment not being taken seriously so far, we see what has happened. There are incentives, they ask people to recycle, use sustainable products, and things like that.
[21:09] There are incentives, but they're not law.
[21:11] And so far, based on what I see, apparently we need more than incentives for that to happen. So I think it's best for this to be part of the regulations and mandatory, rather than just incentives, because otherwise it's going to have a really massive environmental impact.
[21:30] Pamela Isom: Okay, okay, that makes sense. I get it.
[21:33] Pupak Mohebali: I might be wrong.
[21:35] After listening to this podcast, there might be some people saying, like, who are you? You don't know anything about policy.
[21:42] How do you know that?
[21:44] But I believe,
[21:46] just as a citizen of the world,
[21:48] that this is really important to be a law rather than just an incentive.
[21:54] Pamela Isom: And you shared some pretty interesting statistics, right? Some pretty interesting research that you found. So good insight.
[22:01] We are towards the end of the podcast now, so I need to know: before you share words of wisdom or a call to action, is there anything else you wanted to add before we get to that last question?
[22:16] Pupak Mohebali: Global equity is something that is missing in most AI policy conversations.
[22:23] So everyone talks about responsible AI, but most of those conversations are still happening in English, in the West, by people who have never been on the receiving end of a predictive policing system or, for example, digital ID exclusion.
[22:41] So I think global equity is really important to be part of the AI policy conversations.
[22:50] That's something that I care about.
[22:52] And I also think we need realism when it comes to approaching AI regulations. We shouldn't panic. We need to be realistic, with clarity, and think about what problem we're trying to solve.
[23:09] Because if we overregulate, it won't help anyone.
[23:14] But voluntary codes that are vague don't help anyone either. So we need policy.
[23:22] We need policy that is serious but also flexible. Otherwise we'll just keep patching leaks instead of fixing the pipeline,
[23:33] if that makes sense.
[23:35] Pamela Isom: It does make sense. It does. I appreciate the comment.
[23:39] Pupak Mohebali: There is one more thing I thought we could talk about: whether AI can be truly ethical or not.
[23:49] And I think ethics isn't a fixed endpoint,
[23:54] it's a process. So the question isn't, can AI be ethical? It's who's defining what ethical even means, and who benefits from that definition?
[24:07] So that's where the work really begins.
[24:13] Final thing that I have to say is that AI governance doesn't need to be perfect,
[24:19] but it does need to be honest,
[24:22] because it's not just about choosing between fear or hype. We need to be clear,
[24:27] stay accountable, and keep people at the center of the systems we build.
[24:33] Pamela Isom: Exactly. Yeah. Those are actually words of wisdom, so you shared words of wisdom, and I appreciate that. And I didn't exactly hear a call to action, but I could take it as one.
[24:47] We could take it as a call to action. Right. Because you're causing us to really think things through, to think about governance, and to think a little bit more about how we can practically put these ideas to use.
[25:00] So I appreciate you joining me today and being a part of the podcast.
[25:08] Pupak Mohebali: Thank you so much for having me. It's really amazing talking to you, seeing you, as in, in person, digitally. But still, it's really lovely.
[25:22] Pamela Isom: Thank you so much for being here, and it's a delight to have you.