
AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
E034 - AI or Not - Meghan Anzelc, Christina Fernandes-D'Souza and Pamela Isom
What happens when artificial intelligence enters the boardroom? According to governance experts Meghan Anzelc Ph.D., Global Leader of Transformation Solutions, STG at Aon, and Christina Fernandes-D'Souza, it's already there, embedded in board portal software, collaboration tools, and the Microsoft productivity suite directors use regularly. The critical question now is how boards will govern these tools while simultaneously providing oversight for their organizations' AI implementations.
Drawing from their extensive experience implementing AI in regulated industries like financial services and insurance, Anzelc and Fernandes-D'Souza offer practical insights for directors navigating this complex landscape. They emphasize that effective AI governance begins with the board's own practices. Rather than using personal AI tools that might compromise security or reveal sensitive information, boards should establish common, secure platforms that align with their governance principles. This creates consistency while modeling responsible AI use for the entire organization.
The conversation explores fascinating real-world scenarios, including how AI might transform board meeting preparation. While AI can generate and summarize lengthy board packets, the experts suggest a more fundamental approach: redesigning information-sharing processes entirely. They also share a cautionary tale about a Fortune 100 company that implemented a prestigious AI solution only to discover material non-public information being inappropriately disclosed months later, highlighting that even "blue chip" implementations require careful governance.
For boards beginning their AI governance journey, the experts recommend hands-on experimentation, creating a baseline of AI knowledge across all directors, and possibly forming dedicated committees to enhance the board's collective expertise. Their publication "AI-Powered Boardrooms: The Future of Governance" provides a roadmap for directors seeking structured guidance.
Ready to transform how your board approaches AI governance? Start by examining your current tools, establishing consistent practices, and developing the AI literacy needed to provide effective oversight in today's rapidly evolving technology landscape. The possibilities—both positive and challenging—are limited only by our imagination and governance frameworks.
Link to Article: Anzelc, Meghan, and Christina Fernandes-D’Souza. “AI-Powered Boardrooms: The Future of Governance.” Board Leadership, vol. 2024, no. 196, Nov. 2024, pp. 4–8. https://doi.org/10.1002/bl.30276.
[00:00] Pamela Isom: This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and are not legal, health, tax, or other professional advice,
[00:38] nor official statements by their organizations. Guest views may not be those of the host.
[00:46] Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and digital transformation journey.
[01:03] I am Pamela Isom, and I am your podcast host today. And today we have two guests with us. I'm excited to introduce Meghan Anzelc, Global Leader of Transformation Solutions, STG at Aon, and Christina Fernandes-D'Souza.
[01:17] Both of these ladies are leaders at Aon who have a passion for AI governance and risk management,
[01:26] and I will let them elaborate more. So, Meghan and Christina, thank you for being guests on the podcast, and welcome to AI or Not.
[01:38] Meghan Anzelc: Thanks for having us.
[01:40] Pamela Isom: You're welcome. Okay, so I'm going to start with a question for Meghan. So, Meghan first, and then, Christina, we're going to come to you right after. Tell me more about yourself, your career journey and your travels that led you into AI governance and all this stuff you're doing with the boardroom.
[01:59] Meghan Anzelc: Yeah, yeah. So really, my whole career has been in AI and bringing those capabilities into organizations. So both in highly regulated industries like financial services and insurance, and then less regulated industries, but certainly no less important like human capital and other professional services.
[02:21] And the idea around, I think AI and governance and how the boardroom should be thinking about it is really a natural extension of that work, of really thinking about how can we help organizations ensure that the ways they're using AI are responsible and ethical and add meaningful financial value to the organization.
[02:45] And that really starts from the top with the boardroom and how the boardroom is thinking about AI and how they're providing oversight and risk management to the organization.
[02:57] Pamela Isom: Okay. And Christina, same question?
[03:01] Meghan Anzelc: Yeah.
[03:01] Christina Fernandes-D’Souza: So my path is quite similar to Meghan's. I have over 15 years of global, cross-industry experience combining strategy, data analytics, AI and technology in both the public and private sectors, really in regulated industries like financial services and in the insurance space, but also, like Meghan, in not-as-regulated
[03:27] but equally very important professional services.
[03:31] I think to me, as well as being a natural progression,
[03:35] we are both very focused on helping people and companies understand the opportunities and risks in utilizing AI. And both are very important.
[03:48] Pamela Isom: Okay, so you want organizations to understand both the opportunities and the risks and you're trying to help them with the balance.
Absolutely. That makes a whole lot of sense because I always say that myself. Sometimes I talk in keynotes, et cetera. And it's so interesting because the conversation can go either way and stay that way, like when you start talking about the risk.
[04:14] And I always have to remind us, well, wait a minute, let's talk about some of the good things as well. And then it can also be where you're just talking about all these great things that we can do with AI and how we can use it to build our project schedules,
[04:28] to manage our workloads, da, da, da. And then you can stay there and then you have to be mindful to tell people that now we have to be mindful of the risk.
[04:38] So I imagine that the boardrooms need to really be concerned with this. And so, speaking of which, let's go to the next question. One of the questions that I have for you, Meghan, is what are boardrooms getting right when it comes to AI, and what do they need to know?
[04:55] Meghan Anzelc: So I think one of the things that board directors need to know is that AI is showing up in the boardroom itself. And many board directors have probably seen this already,
[05:06] where the board portal software that they use to host documents and meeting notes and so on, all of those vendor tools are embedding AI capabilities in those solutions. And then of course, a lot of the common collaboration tools that all of us use, whether it be Zoom or Microsoft Teams,
[05:27] those tools as well are embedding AI capabilities. So really for board directors, AI has been entering the boardroom whether they like it or not. And so it's really important that board directors start thinking about how to govern and provide oversight for AI in the boardroom itself,
[05:47] as well as of course, their duties to provide that same oversight for how AI is used in the company more broadly. So I think we're going to continue to see AI capabilities be more integrated and embedded in the tools that board directors are using.
[06:04] And so I think it's important for board directors to understand what those capabilities are, what else might be likely coming, and then how can they adapt when they find use cases that are really relevant to them and actually aid in their duties of loyalty and care?
[06:22] And then where are the places where they don't want to use those capabilities either because there's too much risk or it doesn't align to their approach to AI governance? One very small example might be, you can imagine that it could be incredibly valuable for board directors to have an AI tool that provides real time language translation.
[06:43] So you can envision this for multinational boards where there are people sitting on the board who come from different native languages and different backgrounds, and that real-time language translation could be incredibly valuable to aiding in the board's conversations and discussions and decision making.
[07:04] So that might be one place. And then I'll let Christina add some more color on some of the use cases and the trade offs of how these capabilities show up in the boardroom.
[07:14] Christina Fernandes-D’Souza: Thank you, Meghan.
[07:16] I think what boards are getting right is they are exploring various avenues to enhance AI fluency. So they're reaching out either independently or through outside expertise, or having a dedicated AI committee sitting with the board.
[07:37] So I think that's what they're doing well: they're proactively trying to learn, again, the opportunities and the risks of AI within the board, and how they can provide that expertise in terms of risk management to the organization itself.
[07:55] I think I take a step back and think about sort of the use cases. There have been situations that Megan and I have uncovered while speaking to board directors about, you know, how to maximize AI in the boardroom.
[08:13] So, for example, if you look at both sides of the equation, take a step back and see, hypothetically,
[08:21] you have executive management teams using AI to generate large board packets that could even be like 600 pages.
[08:33] And then on the other side, you have board members sort of using AI to summarize and sort of highlight the pertinent information in those 600 pages. We will argue that that may not be the best solution.
[08:49] Are you solving for the wrong problem? So really maybe taking a step back again and saying to yourself, can boards work with their executive teams to redesign? Like, how do you actually redesign those packets where you're not in a loop of constantly going through several iterations to prepare this board packet and then having the board not actually reading the pertinent or most important information.
[09:17] Pamela Isom: That's a good point because I sit on a board and I can't say too much because they might be listening. No, I'm kidding. So when it comes time to reading the materials,
[09:29] I mean, you wait until the last minute when you know you have to read it, and then you're trying to make sure that you get it done before the meeting.
[09:41] And it is a lot of materials to read. And so if I can use the tools, as long as I'm not violating any confidential material or sensitive data, if I can use the tools to summarize and just give me the main points while I go through later and before the meeting,
[10:02] I tend to do it, but I have to copy and paste it into the AI to get it to summarize a few things for me because I have to, you know, pull out any sensitive content and then get it to give me the highlights.
[10:14] And then if I do that, I have to give it context, right? As far as what kind of highlights I want, give it some context. So it just doesn't give me the standard content that it would give to anybody sitting in industry.
[10:26] But I tend to agree that that would be a big help and that's a good use case. I'm, I'm actually a user like that. I, I use it to help me with content.
[10:35] So this makes me think about, with everything that you've said, you both, it makes me think about this publication that you have. So you have this recent publication and I would like you to talk some more about it because I know it goes further into some of the things you've been talking here.
[10:52] So give me some more examples and refer back to your publication. And let's start with Meghan again.
[11:00] Meghan Anzelc: We recently published an article called "AI-Powered Boardrooms: The Future of Governance." And that covers what board directors should do to govern their own use of AI inside the boardroom itself, what that kind of current state is of AI offerings, then what are those different areas of higher risk and lower risk,
[11:23] as well as some thoughts around how to know if AI is actually the wrong solution. So all of that is covered in the article and is something that we think is relevant to pretty much every board director.
[11:36] We cover some of that current state that we talked about earlier around how AI capabilities are being embedded into board portal software, into other collaboration tools. And then what do we see from the perspective of how board directors should be providing that governance and oversight effectively of themselves.
[11:57] Right. Of how they themselves are using AI in the boardroom. And then what are both those risks and opportunities? So again that, that both sides of the coin of what are the potential benefits and what potential downsides.
[12:09] And then also talking through some of what we would encourage board directors to do as they continue on this journey. So in terms of their own learning, their own understanding of the capabilities,
[12:20] because of course the AI capabilities that exist today are going to continue to mature and evolve. So it's going to be a shifting landscape.
[12:30] So that means whatever you sort of land on today, you may need to revisit and sort of refresh your thinking as those capabilities mature and as there are new tools and capabilities available to you.
[12:43] Christina, anything you feel like I missed in summarizing what we shared?
[12:49] Christina Fernandes-D’Souza: Just to add a little bit more detail, Meghan. You did not miss anything, but just going a little bit into detail about how we recommend you experiment with AI yourself,
[13:02] just really trying to understand the nuances around a tool that you're using. So anything from creating your first draft of your emails or your PowerPoint decks, trying copilots to query an Excel spreadsheet,
[13:20] things like that, uploading an article into a large language model that doesn't have sensitive information, and playing around with sort of the prompting, your own prompting expertise. Like are you, for example,
[13:33] creating that correct Persona that you want the large language model to understand where you're coming from, what the task is or what giving it enough context, and what is the format that you want the results in.
So really playing around with the AI tools yourself,
[13:54] we definitely recommend getting your hands dirty.
[13:58] Pamela Isom: Okay. And are there things that the boardroom should not use AI for? I mean, I know that if it's sensitive information, you want to be careful, but any perspectives on what they need to know from a being-careful perspective?
[14:16] Meghan Anzelc: I think one thing is that I think it's very tempting and of course incredibly easy for board directors to just use whatever tool they have available on their phone or on their personal devices.
[14:27] And I think it's also very easy to make the assumption that there isn't sensitive information in the prompt you're using or in the file you're uploading. I would encourage board directors to be a little bit more cautious than that.
[14:41] So even things like you could envision a scenario where perhaps you're going through a CEO succession planning process and you're using an AI tool to help generate ideas of potential candidates.
[14:56] Now, that question and that prompting, you may be able to word it in such a way that you're not giving away that you're considering CEO succession. And of course that's a core duty of the board, so it wouldn't be a surprise to anyone.
[15:09] But you can imagine how the way that you ask those questions and how you follow up and what you do with the information that it provides as output could give someone a clue as to the direction you, as a board director may be leaning in terms of who might be the right successor for the CEO.
[15:28] So you can think through things like that of how might someone interpret the way you're asking questions or the topics you're asking about. Does that give anything away? So I would be cautious about using personal tools, even though it's very, very easy and of course, incredibly tempting to do that.
[15:46] And that's where I think the board should go through an exercise with the appropriate expertise in the room to evaluate the board portals that they're using or other competitor vendors and see which of those portals has the right capabilities that they want to be able to use and has the appropriate security and responsible and ethical AI frameworks embedded, so that you feel like the way that you're using it is safe and protected and you're appropriately managing those
[16:17] potential risks. And then that lets all the board directors use common sets of tools. Right. That's actually in the board portal rather than kind of going off on your own and pulling up a tool or an app on your phone.
[16:30] Right. To just query something. So those are some of the things I would be thinking about and encouraging board directors to think through.
[16:37] Pamela Isom: That makes a whole lot of sense. That boardroom is, it's a critical role. And so the last thing we want is for them to be using a different set of tools because you're going to get different results.
[16:48] So they have to really know and be consistent on what they're doing and kind of govern the tools that they're going to be using and come up with a playbook for the tools that they're going to be using.
[16:59] Because you can, you can get information from one model that says something is okay and another model that says it's at risk. And so how's that going to work out?
[17:08] Right. So I, I, I hear you. That makes a whole lot of sense. Is there anything you wanted to add to that, Christina?
[17:14] Christina Fernandes-D’Souza: I would just say that we recommend really working with fellow board directors to create that level playing field of foundational AI knowledge across the boardroom. Really getting the expertise, or working with the executive leadership of the firm or organization, to identify what those low- and high-risk use cases are.
[17:42] And at an enterprise level, use approved tools that the organization is using in the boardroom. And if you need additional support or information,
[17:53] get that through specific built-for-purpose tools for the boardroom.
[17:58] Pamela Isom: So what I like about this conversation is you're saying that the boardrooms need to,
what we call practice what you preach.
[18:08] So in this particular case they can use themselves as the trial. So use it for what we need to do for our business at hand. And the same issues that we encounter, the same concerns, the same successes,
[18:24] all that can carry over into helping us to provide good governance and oversight.
[18:29] Meghan Anzelc: Yeah, absolutely.
[18:31] Pamela Isom: As I mentioned, I sit on a board and at the last meeting that we had, we discussed how the members were so uncomfortable because of everything they were hearing misinformation about AI.
[18:45] It really had to do with misinformation. And so I'm the one on the board that is the technology director, and I also am the AI expert. And so. But you're not supposed to be an expert when you're on a board.
[18:57] I know, I know. Don't. We'll take that up later. So the thing is that I said that I would get together with some folks and potentially host a boot camp, since I provide training, a boot camp on getting them up to speed.
[19:14] And it'll be AI for board leaders and board directors. And I'm going to do it because I was disturbed that we don't have a strategy. We need a strategy. We don't have a strategy.
[19:28] And how are we going to govern the organization if we don't have one ourselves? Right. So if you're disorganized at the top, that is going to ripple into the organization, even into your oversight practices.
[19:41] So that is something that we are doing.
[19:45] And it is because of the fact that I saw this gap and it was just kind of disturbing to me that people were kind of abusing it on their own, but kind of not so willing to use it in the boardroom.
[19:59] So I think your publication is very timely, and so I appreciate it. So I want to go to the next question.
[20:10] So my question is about some things that I am encountering. So I'm just going to go into my recent experience. So I had this. This recent experience where I was out vehicle shopping, and then I was talking to the dealership about the vehicle that I was interested in.
[20:30] I started talking to them about privacy concerns and how are we going to protect my data when it comes to these connected cars and all that. So we got into this discussion.
[20:42] He said, you don't need to worry about us. You need to be concerned about the Department of Motor Vehicles because they give out so much information. So if I buy a vehicle, he would tell me, I want to be careful because they give out so much of the information.
[20:57] I said, well, not sensitive information, right? And he was like, well, it depends. Maybe not sensitive, but they give out information.
[21:03] So then I started thinking about how my experience has been that I was getting calls about the warranty on my vehicle was about to expire, and it wasn't coming from my.
[21:18] The dealership that I bought my vehicle from. And so I get these calls, I get these emails. And so supposedly they got my information from the dealership, according to the dealership, potentially possibly from the DMV.
[21:33] So I know that they can give VIN information and then turn around and maybe trace that back to me somehow. But too much information they had, and they were steadily reminding me that I needed to renew my warranty or, you know, renew my extended warranty.
[21:50] Anyway, so I chose a different path.
[21:53] So I just want to get your perspective since we're talking governance and AI governance. So as the information proliferates because of tools like AI, and the information travels faster and gets accelerated much faster because of AI, so you know, these brokers or the DMV is transferring the information over,
[22:15] probably innocently thinking that it's potentially for insurance information. There's some legal issues that make it okay, and then there's other instances where it's not okay. But what's your perspective on that?
I just want to get a perspective as the leaders of governance. So I'll start with Meghan.
[22:33] Meghan Anzelc: Yeah, I think you're relaying sort of a personal experience from your personal life,
[22:38] and I think you could take something analogous to kind of the corporate world.
[22:44] And you know, what your story brought to mind was really around supply chain risk and thinking about the risk embedded in using third party vendors.
[22:54] So you can imagine as a corporation,
[22:57] if your company vehicles are all from a particular manufacturer, as an example, and that manufacturer, through connected car devices, collects a lot of information. How exactly to your point of how is that information being used?
[23:13] Who else is it disclosed to?
[23:16] Are they building their own AI models off of that data in some way? So I think that topic of supply chain risk and vendor risk is really critical for boards.
[23:28] And I know they often think about it from other perspectives in terms of kind of like criticality of delivery and ensuring they have components that they need and so on.
[23:39] But I do think there's also this other aspect of it that is about the data side and the AI side and how those different vendors and partners are using information.
[23:52] So I think that points to the due diligence process being incredibly important. So again, back to the boardroom.
[23:59] How can the boardroom set some of the guardrails and some of the principles around the risk tolerance of the organization? And from an organizational perspective, what are they comfortable with,
[24:12] and what are they not, to sort of provide that oversight and direction to the organization, and then, within the management team, conducting that due diligence on vendors and partners in a way that encompasses all these different perspectives on those potential risks sitting in supply chain and those partnerships?
[24:32] Pamela Isom: I think I agree with that. Christina, is there anything you want to add to that? What's your perspective?
[24:38] Christina Fernandes-D’Souza: My perspective is.
[24:40] Well, I'll give you an example. There was this Fortune 100 company that Meghan and I talked with, and they were following sort of the AI path, and they deployed a blue-chip brand's generative AI solution across the organization, and months later discovered that material non-public information was being disclosed across the organization to employees that shouldn't have had access to it.
[25:11] And I think it's really important to understand that even though it was a Fortune 100 company and a blue-chip vendor, responsible AI is at the core of it.
[25:25] The solution was not implemented correctly. It was overlooked, probably because of the brand behind the name. But really being involved and asking questions around, you know, what is the security that we need to implement?
[25:45] What are the third-party vendors? Like Meghan said, where is our data going? How do we put guardrails around that data? So if it can happen at that level, it can happen across various-size organizations and in various functions and departments.
[26:01] And so really being vocal about asking those critical questions is my perspective.
[26:09] Pamela Isom: I appreciate that. All right, so I am going to ask some wrap up questions here and first of all, thanks for your insights today. I do appreciate it. This is good stuff.
[26:21] We need to understand these concepts better and we need to help our boardrooms. They're the ones that we count on. As we start to wrap up, I'd like to hear words of wisdom from Meghan first and from Christina next.
[26:38] Give us that call to action.
[26:40] And remember, if you're talking to me, I'm listening and taking heed, and so are the listeners.
[26:48] So, Meghan.
[26:49] Meghan Anzelc: Yeah, so I often reflect on a BBC interview from the mid-90s that was done with David Bowie, where they were talking to David Bowie about the Internet and how that might impact the music industry.
[27:08] And one of the things that I thought was really fascinating was that David Bowie said, effectively (I'll paraphrase), the potential of this technology,
[27:18] both the good and the bad, is unimaginable. And I thought that was incredibly insightful of David Bowie, because if you were around in the mid-to-late 90s and remember the Internet then, there wasn't a lot there.
[27:33] And of course David Bowie could not have envisioned something like Spotify.
[27:38] But what he did understand was that there were going to be opportunities that he couldn't imagine today. And so that's something I think really translates to this moment in time around AI of the possibilities, both the good and the bad are unimaginable.
[27:56] And so I think that's really a call to keeping an open mind and continuing to learn and watch to see how these capabilities evolve and how those opportunities arise so that you can make the most of it.
[28:15] Right. And really take advantage of the good side and beware of the bad side and try to minimize that. So I think that open mindedness that David Bowie showed, you know, whatever that was 30 years ago is, is incredibly relevant today.
[28:31] Pamela Isom: Okay, okay. I think I'll try to be more open-minded. No, I'm open-minded. Okay. All right, Christina.
[28:36] Christina Fernandes-D’Souza: Along those same lines, I would say my call to action would be: be open-minded and be involved. AI is not going anywhere. This technology, to Meghan's point, has been in play for decades now; the insurance industry has been using it
[28:59] for 30, 40 years. It's not going anywhere.
[29:02] Christina Fernandes-D’Souza: There's exciting opportunity, but be involved, understand it, experiment with it, educate yourself with it at an individual level, within your jobs and as board members and leaders.
[29:16] Pamela Isom: Wow. This has been a great discussion. Very practical, very down to earth, but very needed. So I appreciate it. So thank you all for being here. I really seriously appreciate it.
[29:31] Thank you very much.
[29:32] Christina Fernandes-D’Souza: Thank you.
[29:33] Meghan Anzelc: Sounds great. Thanks again for having us.
[29:35] Pamela Isom: Bye. Bye.