
AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
E027 - AI or Not - Andrea Bonime-Blanc and Pamela Isom
Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.
In this episode of AI or Not, we are joined by Dr. Andrea Bonime-Blanc, Founder and CEO of GEC Risk Advisory and a global leader in governance, risk, ethics, and technology impact. Andrea shares her fascinating journey from a Wall Street lawyer to a C-suite executive and her ongoing efforts to bridge the gap between technology innovation and responsible governance. Delving into topics like cybersecurity, ethical AI, and exponential governance, Andrea highlights the importance of embedding a responsible tech culture, integrating stakeholders in technology loops, and future-proofing organizations in a rapidly evolving digital landscape. With thought-provoking insights into the accelerationist vs. decelerationist debate and a focus on interdisciplinary collaboration, Andrea provides a roadmap for fostering innovation without compromising safety and ethical considerations. Do not miss this engaging discussion about the future of governance, technology, and leadership.
This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and are not legal, health, tax, or professional advice, nor official statements by their organizations. Guest views may not be those of the host. Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and digital transformation journey. We have a unique and special guest with us today, Andrea Bonime-Blanc. Andrea is founder and CEO of GEC Risk Advisory, and she is a leader in governance, risk, ethics, and the impacts of emerging and exponential tech. We have quite a bit in common, and I certainly value you and your amazing credentials, so what I'm going to do is ask you to tell us a little bit more about yourself, Andrea. And first of all, welcome to AI or Not.
Speaker 2:Thank you so much, Pam. It's a real honor to be here. Thank you for inviting me to be part of your wonderful podcast and I look forward to our conversation.
Speaker 1:Yeah, I'm excited to have you. So will you start out by telling me more about yourself and your career journey, which is fascinating? What's driving you to do what you do today, and what do you have planned for tomorrow? Tell me more.
Speaker 2:Boy, oh boy. Well, that's quite a great question. So I'll start with the past, today, and then I'll end with the future. So my professional background was that I got a PhD in political science and a law degree at the same time. I opted for the practical route of becoming a practicing lawyer, but I always had a passion for the systems thinking that goes with being a social scientist. So that kind of informs a little bit about what I'm doing today and tomorrow.
Speaker 2:But I started my career as a lawyer on Wall Street for several years. Then I went in-house as a general counsel and as a senior executive in charge of a variety of different functions, in addition to being GC. For about 10 years I also supervised enterprise risk management, corporate social responsibility, ethics and compliance, and a few other things. And I was in four different companies, almost 20 years as a C-suite executive. And in the last company I was given responsibility for cybersecurity, which was at the time a new and emerging field to be in charge of, and I panicked at first. But then I realized I had to learn a few things that I didn't know, especially technological things. And I did realize in that exercise, which was a very, very valuable opportunity for me to learn more, that cybersecurity was, in its essence, governance. It was about risk management, about good governance, and about having the right people and the right resources to be able to do it. So it was a very, very valuable educational experience for me, and at the end of that exercise I was really intrigued by the whole tech scene and the whole tech risk, tech ethics, and sustainability of a variety of different parts of corporate or organizational activity.
Speaker 2:But cybersecurity was definitely part of what I did, and one of my first gigs, so to speak, was for The Conference Board. They asked me to do a research study on emerging best practices in cyber risk governance, and that was a great platform to talk to a bunch of great companies, learn more about it, and then also be able to share some of that. And I ended up starting a course at NYU, at the School of Professional Studies Center for Global Affairs, where they had a master's program in cybersecurity and national security. So I contributed a new course for them, which was about cyber leadership, risk, governance, and resilience, and so I've always had that going on. I've served now on a few boards, the Cyber Future Foundation as well as WireX, which is a for-profit, and what I've done is maintain that interest in the new technologies that have been emerging. Back in 2017, I co-authored a book on AI called The AI Imperative, which was looking at some of the early strategic and governance issues that we needed to think about as board members, corporate executives, and managers in terms of AI.
Speaker 2:So, throughout this process of the last 12 years that I've had my own business and have been giving strategic advice and governance advice on these issues, I've also been a member of the NACD and of the Athena Alliance, and I've done a lot of article writing and speaking, really trying to push the envelope of understanding these new developments. And so, coming to the today-and-tomorrow part of your question, I'm intrigued and both troubled and excited about what these new technologies can do for us. The opportunities are explosive, as we all know, especially with generative AI coming online and how it interconnects with all these other technologies, like biotechnology, computing, supercomputing, and edge computing. And quantum one day will probably eat all of our lunches. But in the meantime, all of these things are extraordinarily exciting because they will push the envelope of what we can do to improve health, education, and so many other things. But unfortunately, just like every other technology, it's got dual use, and so the downsides can be really serious as well. And so for those who are developing some of these very serious and important large language models and all the new products and services that are coming online:
Speaker 2:We need to think about our stakeholders responsibly, whether they're shareholders, customers, users, or children and other vulnerable people who are not necessarily getting the full benefit of this.
Speaker 2:We need to think about all of those implications. So, for me, I have a passion for thinking about how the stakeholders can be in the tech loop, and that's one of the things I talk about in my new book, which I've just finished. My hope is to keep helping, whether it's people, organizations, even governments and agencies, to think creatively, effectively, and responsibly about the implications of these new technologies and how we can create guardrails without stifling innovation, without putting too many limits on the great innovators that are out there. But, at the same time, some of those great innovators don't really think about the responsibility piece, so we need to have a coming together. So that's my big focus going forward: bringing what I call in my book the tech masters of the universe together with the tech guardians of the universe, so that we're talking to each other and not just competing or, you know, talking past each other. So I'll stop there.
Speaker 1:That's great, yeah. So I'm going to take what you just said, since that's such a fascinating background and where you're going is intriguing, and let's pull a few threads. A moment ago, you talked about the need for responsible tech. You mentioned the tech masters and the guardians, because they go hand in hand. You mentioned exponential capabilities, how the technologies that are emerging are exponential, and that we know we have to have guardrails. So let's talk some more about that. Tell me what you see as responsible tech, and what type of actions do we need to take to make sure that the tech evolution is more responsible?
Speaker 2:Yeah, you know, I think one of the things, and maybe this is because I'm sort of trained to think this way, and maybe I'm just originally this way, a bit of a risk manager, governance person, ethics person, is that I've always talked about this outside of the tech world. I spent many years as a chief ethics and compliance officer in several companies, and tech really wasn't a big part of that. But now tech suffuses everything, and people who say that they're not a tech company are mistaken. Everybody's a tech company now. So I think some of that old-fashioned thinking needs to be updated and refreshed, for sure, in terms of the ethics and compliance frameworks, for example, that we've had and that are part of the larger companies' way of doing business, certainly the better companies' way of doing business. We need to refresh, update, and integrate that thinking with the way we create tech products and services, or the way we acquire, whether it's data or algorithms or software or hardware, and then integrate it into our own supply chain, our own product creation, development, design, et cetera. And so we need to have what might be called the old-fashioned great leaders, or good-to-great leaders, in place in the first place as CEOs. And we need to have board members who actually are savvy about these issues, not just the ethics and compliance and the governance and the risk, but who are savvy about tech, who understand it better than your average person like me, for example, who's learned it on the job but is not a technologist, not an engineer, not a scientist or mathematician.
So I don't have those lenses, but I do have these other lenses, and so I think it's extremely important to have the right leader in place who is open to learning and understanding from people that have the chops and the skills that the CEO doesn't have, the top leader doesn't have, and same thing for the board.
Speaker 2:You need to have a board that is diverse and representative of the world we live in and are going to live in, not the world that we used to live in, and so we have to have those technologists, those scientists, those cyber experts who are capable of also being board members, being on boards. We need more than just one person like that. We need several people on every board who can think laterally, who can think in an integrated, 360 kind of a way, and who can ask those really important questions about whether certain activities are taking place around the acquisition and integration of data or the design of a product or service. And so small things, not really small, because they can become big, like: do you have an ethicist involved in the design of your new products? Do you have someone who's talking with and collaborating with the engineers and the software people in creating those new products, in bringing in the data, knowing where the data is coming from, all this stuff that relates to big data, algorithms, generative AI, et cetera? Do we have those savvy ethics people who are working at the inception and during the development? We need board members and CEOs to think about these things in the same way. Do we have those interdisciplinary teams that have eyes on? That's how you integrate responsibility.
Speaker 2:It's not by saying these are our six principles of responsibility for AI and you know it's basically tech washing. You put a beautiful thing on your website and you say you do it. Well, let's get a little deeper into that. Are you actually doing it? Do you have all the mechanisms to prove that you're doing that? Do you have auditors, monitors, people who evaluate these things, and can they then represent and report up to the board and to the C-suite that these things are happening? So it has to come from the leader in the first place. It has to come from the board in the first place, and this is why it's so important to have a really savvy board who can hold the CEO accountable on this stuff. It's a very similar story in the past. Do you have an ethics program? Do you have a sustainability program? Well, do you have a tech responsibility program? Are you actually doing the things that you're supposed to do to integrate the responsibility, or are you just creating another form of greenwashing, which we can call tech washing right?
Speaker 1:So, as you were talking, I was thinking about an experience I had this week, actually. I am on several boards; I'm either on an advisory council or a board of a not-for-profit. On one of them, I oftentimes feel like the technical person who is the sidekick: oh yeah, so we'll talk about this, this, and this, and, oh yeah, the technical piece, right. And so this time, at this last meeting, I basically let them know: if you want to be more effective, you need to think about technology, you need to think about the technology director, we need to think about what our technology strategy is. It doesn't seem like we have one. Maybe we can lean on those that have technical savviness to help pull together a tech strategy, but where is it? What is it, right?
Speaker 1:And so the eyes came open and we're rocking and rolling, and so I think that you mentioned something. You said we need a savvy board, and back in the day, maybe you would feel like that, if you are technical, that you've got what you need right, you just need the soft skills, you just need it. That is so old school, absolutely. And as you were talking, I thought about all that I've literally had to go through that with the boards that I'm on and point that out, because we think that when we get to that level that, hey, we don't need this.
Speaker 1:Well, maybe you don't need to be super, super technical, but I don't know how you can provide oversight if you don't have a sense of what cybersecurity and technology are all about, right? And if you're not in tune with how technology is emerging, how are we effective board directors, right? So you've pointed that out, and I like that. I think that leaders should be tech savvy, evidently, and that we need to refresh and update our approach, and I love how you pointed out that we want to update the way we integrate and the way we acquire technology. That's pretty cool, so thank you. So here's my next question: how do you see the impact of AI from the perspectives of culture and generations? And can you tell me more about your perspective on the debate that's going on in Silicon Valley?
Speaker 2:Yeah, wow. Yesterday I had a very interesting off-the-record chat with a group of women from the Athena Alliance. We talked about the accelerationist versus decelerationist tech culture issue that's going on, and I've written about it also in my book. On the one hand, you have Marc Andreessen and some of the technology billionaires who are saying we need to grow and we need to create and innovate without any guardrails. There's something he wrote called the Techno-Optimist Manifesto, which I encourage everybody to read, and there are things there that really made me jump a little bit, because he basically says "the enemy is," and then he mentions all of these different things that are what I do, like ESG and sustainability and risk management and guardrails and all this stuff. I oversimplify what he says in his Techno-Optimist Manifesto, but there's a real truth to it, right? And he's not the only proponent of this. Clearly, we have Elon Musk and Sam Altman and many others who are really out there to innovate no matter what, and, yes, some guardrails are being put around them, whether it's from regulators making noises or other stakeholders making noises. But then you have other people. I really love the Center for Humane Technology, which is run by Tristan Harris and Aza Raskin. They started it, I don't know exactly how long ago, but several years ago, and they're famous for having put The Social Dilemma movie together, which basically talked about the lack of guardrails in social media, where we now all bemoan the terrible things that have happened. In early 2023, they put together a YouTube conversation called The AI Dilemma, and it really opened my eyes. I hadn't started writing my book yet, but it gave me a lot of fodder on this other side, the decelerationist conversation.
Speaker 2:Decelerationist is a little bit of a pejorative, and the techno-optimists also like to talk about people who are looking for governance and guardrails as doomers. You can see it also within the whole OpenAI saga that we've seen over the last year or so, where Sam Altman gets fired by the nonprofit board, which is concerned about safety, and then he comes back, and there have been all kinds of other developments at OpenAI over the last year. But the point is that there's this struggle between those who are thinking about governance, guardrails, and stakeholders and those who just want to innovate and use as much compute power and capability as they possibly can, and I really think there's a middle ground here. I talk about this a lot in my book: we don't have to be extreme over here or extreme over there. We need to be talking to each other and having that interdisciplinary dialogue and respect for each other, and I feel like there's a lack of respect pretty much in one direction.
Speaker 2:I've written a couple of pieces about this. I talked about the diversity imperative in exponential technology. I wrote a piece for Diplomatic Courier on this, in which this cultural divide also exhibits itself, unfortunately, as very powerful, very wealthy white men over here and everybody else over there, meaning women, mostly people of color, the less powerful and less wealthy. We don't want to have that. That's not good for society, it's not good for democracy, it's not good for a lot of things. And so through my book, I'm trying to sensitize people, especially those in positions of power, leaders of companies and board members, to think about the broader stakeholder community when we're talking about tech innovation. And, by the way, I'm the first person in my family who has always bought the latest tech. I didn't stand in a physical line to buy any of the iPhones, but I was the first one to order and check them out. So I love innovation, I love new technology, I love to know what we can do next.
Speaker 2:But this culture war that we have is also a reflection of some of our political polarization.
Speaker 2:Unfortunately, it's not helpful to anyone, and it hurts many more people than it helps, and so we need to come up with constructive ways to bridge the gap, I think. And so I mentioned the tech masters of the universe and the tech guardians of the universe, because that came up as I was writing the book, and I think it's true. And then there are a lot of people in between who are worker bees, right, the engineers and the ethicists and everybody else who's working hard every day to try to get this done right. But there are a lot of corners being cut, and we haven't even talked about the people, I don't really have a cute name for them, on the outside of mainstream society: the criminals, the gangsters, the negative nation-state actors that are out there looking to use these tech tools as weapons. So that's a whole other kettle of fish we need to be aware of, right?
Speaker 1:So I agree that there are varying cultures and varying perspectives that must be taken into account. Your cybersecurity expertise, and earlier, when you said that really cybersecurity is good, strong governance: that's how I feel. That's why I have always stressed that there is a requirement that we all understand how to secure our assets, from both a physical security and a cybersecurity perspective. I don't lean and rely totally on a CISO, because they can't do it by themselves, and we should be ever learning and coming into the knowledge of how to better secure our assets, because our assets, whether they're with your employer or not, are a reflection of us. We need that, right. And so that's why I think cybersecurity is so important, and you said that earlier. And then, as you were talking through your perspectives on culture, the generations, and the AI debate in Silicon Valley, I kept thinking about that and governance at large, because what you described is governance at large and how governance is emerging, just as the things that we have to govern are emerging. So that is brilliant. That's a really good analogy there, because we do have to think about that, and our governance models need to evolve and keep up, and so I think that's just really good.
Speaker 1:So can you tell me more? You did mention your book, so is there more that you can tell me? And, as you're kind of talking about it, can you tell me what motivated you to write your book? If you haven't already, what can I expect when it is published?
Speaker 2:Well, I am very relieved that I was able to deliver my manuscript to my publisher at the beginning of September. It's going to be about a year before it gets published, because it's a university press and there's peer review and lots of production. My big challenge over the last six months or so was that it needs to stay relevant in the face of multiple fire hydrants, not just fire hoses, of tech change, innovation, events taking place, et cetera. And so, at the end of the day, what I was trying to do with this book is use my lenses, because I don't have anyone else's lenses. I have my lenses of governance, ethics, risk, and impact, and I think there are a bunch of other lenses that need to be brought to this conversation to illuminate things, especially for people like me who are not the technologists and the scientists. But I think all of us really benefit from talking to each other, bringing those lenses in to help develop the governance of tomorrow, the ethics of tech, and how this all plays out from a sustainability standpoint. There's all that big discussion taking place on the energy and water needs of all of these data centers and expanded compute capacities, et cetera, and my background is more in understanding those things. But those of us who are like me, we have to update ourselves, we have to really go with what's coming at us here and not just sit back. And I'm of an age where a lot of people my age are serving on boards, and I'm serving on a couple myself, but the point is we can't sit back anymore, we have to lean in on this stuff.
Speaker 2:And so in my book I try to provide that kind of framework to understand the context of our time. So I start the book with some megatrends. I've written about megatrends before, but here with a lens on the technology impact on things like socio-ecology, geopolitics, leadership, trust, and the state of capitalism and the economy. So, big-picture megatrends. But then I go into a second part of the book that looks at five major groups of technologies, just to sort of scratch the surface for those who are not experts. And so I lent my body to science to try to understand some of these things and then explain them in English back to people who are not technologists. So I've done that in part two, and then part three is where I really apply my lenses directly, and I have what I call the exponential governance mindset. It's five elements that I think we can all use and adopt and think about as we do our work in our companies or in other kinds of organizations.
Speaker 2:The last part of the book, the last two chapters, is really then asking: how do we future-proof ourselves as people and professionals, and our organizations? That's one chapter. The other one is about future-proofing our global commons, everything that we share in this world. How do we future-proof a world that's very fragile right now for many reasons, including climate, of course, but also technology? Technology can overnight create some of the exponential risks that we fear. Hopefully that won't happen, but it could. So that's the arc of the book, and it's really a journey to try to get people like you and me to think about this stuff a little more systematically and then start bringing their talents and skills to the dialogue. And that goes back to the generations again: respecting the various generations and looking at things from a generational perspective.
Speaker 1:And then you mentioned something about exponential governance mindset. I think you said that's one of the chapters.
Speaker 2:It's a framework that contains five elements, which are five chapters.
Speaker 1:Can you talk about that a little more, of course? I mean, I got to wait to get the book, but tell me what you can tell me. It's going to be a while before we get there.
Speaker 2:So everything for me starts with a good leadership and good governance. So having the right people in charge, because, at the end of the day, what they do and what they say sets the tone for the rest of the organization and if there's a bad culture, it's a reflection on those leaders not paying attention or not doing what they say they're doing, et cetera. So the first chapter I call the first element of this mindset. I call leadership. But then the more sort of explanatory piece I'm always looking at tech is what I call turbocharging 360 tech governance is what I call turbocharging 360 tech governance. So it's about getting everybody to think about their role in the governance of tech, starting with the early stage designer, who is bringing algorithms and other tech tools to creating and designing products and services in a company, all the way up to the board asking the right questions. So there's like a 360 tech governance that is needed. That's element number one and that comes from the CEO and the board setting the right tones and parameters on all of these things. The second element I call ethos and it's about embedding a responsible tech culture and this goes back to some of the things we talked about a little bit before, which is the ethics and the responsibility that a company applies to how it does business. Right? Do you have the ethics of compliance framework? Do you have a sustainability program? Are you connecting the dots between them? And then how does tech integrate with all that? So that's the second piece is the ethos and the culture of the organization.
Speaker 2:The third one I call impact, and the subtitle of that is integrating stakeholders in the tech loop. So you're a technologist, you know about humans in the loop or on the loop or off the loop, which is the worst, I guess. So I did a little play on words in this chapter, calling it stakeholders, integrating stakeholders into the tech loop. So as we create our programs and our products and services, we have to think about who are our most important stakeholders. How is it impacting, even when they're not customers? They might be users, children are users, they're not customers and how are we accounting for integrating them in that tech loop? So that's the third element.
Speaker 2:The fourth one I call resilience, but it's actually a catch-all word for deploying a poly-risk and poly-crisis mentality. What I mean by that is we live in a world where many risks are happening simultaneously and many crises are happening simultaneously. This idea of a poly-crisis world has become popular recently. Well, I've coined poly-risk as part of that consideration, because we have many risks that are overlapping with each other and egging each other on. So that chapter is all about risk, crisis, and resilience.
Speaker 2:And then the final element of this mindset is what I call foresight, and it's about unleashing a future-forward tech strategy. So when you're putting your business strategy together, and you could be a nonprofit, a government agency, or a corporation, are you pulling together all the elements that you need to think futuristically, to scenario-plan properly, to understand what your options are? If you're a company, a business, are you thinking about the startup that's going to come out of nowhere and eat your lunch from a technological standpoint? That kind of thing. So, developing a conscientious, future-forward business strategy. That's the fifth element of this exponential governance mindset. I know there are a lot of words here, but there's a lot of detail, and the cases and examples that I've put in the book illustrate these things. It's not just an academic exercise, let's put it that way.
Speaker 1:So let me just say, if I go back to my cyber hat, the first thing I thought about is a tabletop exercise where we're going through a scenario: some small startup has now emerged and they are a threat to your organization. How are we going to deal with this, right? And so, again, those are some of those fundamentals that we go back to, because that's IT security speak. Yeah, I mean, it's just naturally a part of the way we do business. There's something I want you to know about.
Speaker 1:I don't know if you are aware of this I brought this up before we started talking but OpenAI has announced changes to its safety and security practices, and what they've done is they've established a new independent board oversight committee, and apparently this safety and security committee has more responsibilities than before. So now its responsibility extends beyond recommendations. It has authority to oversee safety evaluations for major model releases and exercise oversight over model launches. So I thought they were doing that already, but if I think about our conversation today, this comes to mind. It says the committee will have the power to delay a release until safety concerns are adequately addressed.
Speaker 1:So your conversation around taking into consideration the poly-risks and also the multi-stakeholder perspectives, right, so considering the children. And I heard you say something like who are the stakeholders, and as you were talking I wrote that down and added: who are the most important stakeholders, and why? So the tabletop exercises can help us think through that. But I'm curious about this SSC and how you think something like your vision and your model fits with what they're talking about here, and I know you didn't know about this. So I just want to get your perspective. Do you think that gels with what you're thinking? Do you suppose they could use a little bit of insight from you?
Speaker 2:Well, they can take it or leave it. I've been ignored by many over the years. I just hope to make a dent with those who are interested in these concepts. Look, I think what you just described sounds very promising and hopeful and necessary. It's something that hasn't been there, and we all know from the news over the last year or so that a lot of the safety and superalignment people and others have left OpenAI because they're concerned about the lack of safety guardrails. The very first one, and most important one, was the chief scientist, Ilya Sutskever, who originally fired Sam, and five days later Sam came back and he was sidelined for the next six months. But he started his own firm, which just got a billion-dollar first round of financing, I believe, and he wants to build a super-safe AGI kind of company. He's going to put safety first, and to me that's leadership, and hopefully the governance and everything else that he builds within his company actually works in that direction. So maybe OpenAI is catching up with some of those concepts, but they lost him and they lost Jan Leike, who was also a very important part of this. He headed up the superalignment team at OpenAI.
Speaker 2:We have a series of anonymous letters and other types of complaints and concerns from both existing and former employees of OpenAI saying they didn't have an environment where they could feel safe to speak up about these kinds of things. So to me it goes back to the essence of the ethos, the culture that I was talking about in one of my elements. Is it one that allows anyone on staff, or even subcontractors, to speak up when they see a concern? This is very basic ethics and compliance 101. But if you don't have that in a highly innovative, bleeding-edge technology company, you are flying without a net, and this is very dangerous to all of us as stakeholders, because we don't know what's coming out on the other end of this sausage factory, the black box of AI, right? And so for me it's about: is OpenAI actually taking this seriously? And maybe they are. They do have a new board, and they do have very prominent people on their board, and they've brought in some high-level executives from other prominent corporations.
Speaker 2:I hope it's more than doing what looks good. I hope it's actually good, and I think it goes back to who's on this new safety committee. Are these truly independent folks, truly able to get the skinny on whatever they need to get the skinny on? Or is it something that looks good to the regulators, looks good to the stakeholders, but isn't effective? So to me the question is: is it a check-the-box exercise or is it a real thing? If it is a real thing, I think it's exciting and great. But if it's another sort of tech-washing activity, then I don't think we're moving ahead. But maybe it inspires others. This is a continuing dialogue, so hopefully this will continue to move in the right direction.
Speaker 1:I would say let's watch and see, and see if we can have some input along the way. Exactly. You have been sharing words of wisdom and your experiences throughout this conversation, but I always wrap up with that question. So can you please share words of wisdom or experiences that you would like to leave with the listeners, and with me?
Speaker 2:Well, thank you, pam. Thank you for this wonderful conversation. I really enjoyed it and you have such a wealth of experience and knowledge also to bring to this conversation. Obviously, you're a leader in your field as well as a podcast leader, so you do a lot of great stuff for the community of listeners. I mentioned that in one of the last chapters of the book.
Speaker 2:I talk about how we future-proof ourselves and our organizations, but it really starts with each of us, right? At the end of the day, we are only as good as who we are, and if we're not improving ourselves and doing certain things to future-proof ourselves, we're not going to be very capable of dealing with these really cutting-edge and bleeding-edge issues. So I have a part of one of the chapters that talks about some of the personal qualities of the future-proofed professional, let's say, and this applies as much to a board member as it does to people coming up through the ranks, so to speak. There are 10 different qualities that I've singled out, but I'll only mention a handful that I think are the really important top ones. I think we need to be curious and educable. I think we need to really be lifetime learners, and we can't just do a one-and-done anymore. We have to have lifetime education going on, always continuous learning. We have to be humble about what we don't know, and I will be the first one to say to people, I don't know that. I don't wanna get into trouble because somebody thinks I'm an expert in something that I'm not. I've turned down business because of that.
Speaker 2:So I think it's important to be humble. And the other thing that's so important, and I think this ties into the accelerationist versus decelerationist debate we talked about earlier, is to be empathetic. We need empathetic leaders, especially ones who listen. They have to make important decisions, and sometimes tough ones, but listen to your stakeholders, to your community, to your customer, to your user. Don't just do things because the regulator's waiting with a hammer. Be empathetic to the stakeholders. And I think I'll end with this: somebody called me this the other day and I suddenly realized, I'm a systems thinker. We need more systems thinking going on in our organizations, connecting the dots of disparate things. I think the humility piece and the continuous learning feed into this too. We need to think about the different parts of technology as systems that integrate and overlap with a whole bunch of other things that we're more familiar with, and we need to understand how to connect those dots.
Speaker 1:I think that's really great input. Thank you for the conversation. One of the things that I took note of, because I was taking notes here, and something that I really appreciate, is that this conversation addressed governance but from a different lens, more from an interactive, collaborative, future-thinking approach, right? So, the future of governance from the boardroom to the practitioner. I'm so glad, because oftentimes we leave out the practitioners and we don't think about the boardroom, we think about the managers, but it's more than that. So I'm so glad that we had a chance to talk about it from that perspective and cover the gamut, and I would say it was an inclusive governance discussion. Yes, so thank you very much, and I appreciate you taking the time to talk to me today. I think it's going to be very valuable to the listeners, and I love those takeaways. Bye.