AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
E008 - AI or Not - Greg Sisson and Pamela Isom
Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.
Curious about the future of artificial intelligence in the realm of cybersecurity? We welcome Greg Sisson, Co-founder and Chief Operating Officer (COO) of CI Discern, a seasoned cyber executive with a remarkable 40-year career in public service, including the US Army and various government agencies. Greg takes us through his transition from government roles to entrepreneurship and shares the inspiring motivations behind co-founding CI Discern. Learn how understanding an organization's mission and its stakeholders is crucial in navigating the risks and seizing the opportunities presented by digital transformation. Personal milestones, such as the birth of his granddaughter, also played a significant role in shaping Greg's career decisions.
Discover why data is the crown jewel of the AI era, surpassing the value of hardware and software. We discuss the pivotal roles of Chief Information Security Officers (CISOs) and Chief Data Officers (CDOs) in protecting this invaluable asset. Hear Greg's insights on the importance of a collaborative approach between these roles to safeguard data effectively. Amid the rising tide of data breaches, maintaining consumer trust through timely and transparent communication is more important than ever. Don't miss out on learning about robust data loss prevention measures that every organization should prioritize to prevent breaches from becoming the norm.
Get ready to explore the dual-edged sword of AI technologies, from deep fakes to adversarial AI threats. We delve into the complexities of AI governance and compliance, emphasizing the need for continuous monitoring and human oversight to maintain ethical standards. Greg sheds light on the critical intersection of software bill of materials (SBOM) and AI, and the importance of securing language models against adversarial attacks. Finally, we discuss the shift from being the office of "no" to the office of "know," emphasizing the importance of collaboration, education, and a balanced approach to AI's benefits and challenges. Tune in to gain valuable insights and actionable strategies to navigate the evolving landscape of AI and digital transformation.
This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and are not legal, health, tax, or other professional advice, nor official statements by their organizations. Guest views may not be those of the host.
Pamela Isom:Hello and welcome to AI or Not, a podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and digital transformation journey. My name is Pamela Isom and I'm your podcast host. We have another special guest with us today: Greg Sisson. Greg is a cyber executive. He's a US Army veteran; thank you for your service. He is co-founder and chief operating officer of CI Discern. He's a friend and a colleague I worked with at the US Department of Energy. Greg, welcome to AI or Not.
Greg Sisson:Thanks, Pam, it's awesome to be here.
Pamela Isom:So let's talk a little bit more about you. Tell me about yourself, tell me about your career journey, how you got where you are today, what caused you to dive into entrepreneurship.
Greg Sisson:Great. I'll try to just highlight the wave tops here. It's been a while, but I actually spent almost 40 years in public service. I started out enlisted in the Army, then went to officer candidate school and got a commission as a communications officer in the Army, and retired from the Army in 2004. I was in a training organization at that time, called Joint Forces Command, in what they called the J-7, which was the Joint Warfare Center. I was able to retire and go straight into government service and serve in a technical role there, helping the mission rehearsal program prepare large joint staffs to go to Kabul and Baghdad. I did that for about 10 years, and it was an amazing opportunity to continue to give back to the mission and support the training of those staffs.
Greg Sisson:Then I did a short stint in the Joint IO Range as the Deputy Program Manager. The Joint IO Range is a DOD closed-loop range to test offensive and defensive cybersecurity capabilities. And then I was given an amazing opportunity to join the Defense Senior Leader Development Program. It was a two-year program designed to prepare government civilians across the Department of Defense for the Senior Executive Service, and part of that was spending a year at the Naval War College up in Newport, Rhode Island. Then I came back to the Pentagon and helped to develop the DOD's first cyber strategy, and then went up to an organization called Joint Force Headquarters DODIN, which is the defensive arm of US Cyber Command, where I was the deputy director of operations, or deputy J3, and then the Chief of Staff of that organization. And then, in 2018, I moved down to the US Department of Energy, where I held a number of roles, but culminated my civilian career as the CISO at the US Department of Energy, which is where we met. Yeah.
Greg Sisson:And so, as far as entrepreneurship, I actually left government service in 2022 and decided to go into industry, and I joined Ernst & Young as a consultant, working with energy companies and manufacturing companies to help their CISOs with their cyber program transformation.
Greg Sisson:It was an amazing opportunity, and that's where I met our founding partner and managing partner of CI Discern, Dylan Diefenbach, who led the cyber and energy practice in the Americas.
Greg Sisson:He was a partner at EY. Late last year our granddaughter was born; our daughter actually lives overseas, and our granddaughter was born in December. I really started to think about what was important and what I wanted to focus on in the next five or 10 years. So Dylan and I started to talk, and the more we talked, we settled on this idea: why don't we go out, take the experience and the people we know, do something that we want to do, focus on our families and on things that we want to work on, and go out there and help companies?
Greg Sisson:That's where the name of the company comes in. This was all Dylan's idea, but it's really interesting, and it's a great lead-in to the discussion we're going to have today, because the word discern is about discerning perception versus reality and helping companies take a discerning look at risk. And so, as we start to talk about AI and AI risks and benefits, it's all about taking a discerning look. That's where the company name came from.
Greg Sisson:That is my journey to entrepreneurship.
Pamela Isom:Well, congratulations, I'm excited for you. I did the same thing, a similar trail, but I just always thank people for their service, so I'm going to do that one more time, because your service is what keeps us safe, and even in the cybersecurity realm, our service keeps us safe. So I really appreciate it, and thank you again. I'm fascinated with what you're doing today, but I'm also fascinated with your background, because you were at DOD, you served in the military, you mentioned Kabul. I am excited for you and the grandbaby, so congratulations on that; I know that feeling too. And so I want to go a little deeper into discern, because I like how you said that. How do you discern in this digital transformation era? What are some examples of how we would discern risks, and how we would discern how organizations can be successful and effective?
Greg Sisson:I think it's all about, and this really applies across the board, understanding the organization, understanding the mission of the organization, and understanding what our role as cybersecurity professionals or technical professionals is in enabling those missions. And in order to do that, we have to talk to the stakeholders.
Greg Sisson:We have to get down to each of the different parts of the department or the agency or the organization and really understand what they're trying to do. You can't put in place security policies, you can't put in risk controls and risk mitigation steps, without understanding the potential impact of those steps or those controls on their mission. The other thing that we have to do is have good asset visibility, and when I'm talking about assets, this will ring true with you: I look at data as an asset. A lot of people look at hardware and software as their big asset inventory, but I think data is even more important as an asset. That means having a good understanding and inventory of your data assets, but also being able to classify that data so that you understand how to protect it, where it's located, and who has access to it. And by doing that, then you can start to have a discussion around AI and how AI could potentially be a benefit or a risk, depending on how employees or the organization want to use certain tools.
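To picture the data-asset inventory Greg describes, here is a minimal sketch in Python: one record per data asset, carrying its classification, where it lives, and who may access it. The field names and classification tiers are illustrative assumptions, not anything prescribed in the conversation.

```python
from dataclasses import dataclass, field
from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

@dataclass
class DataAsset:
    """One inventory entry: what the data is, how sensitive it is,
    where it is located, and which roles may touch it."""
    name: str
    classification: Classification
    location: str  # system or store holding the data
    owners: list[str] = field(default_factory=list)
    authorized_roles: set[str] = field(default_factory=set)

def may_access(asset: DataAsset, role: str) -> bool:
    # Public data is open; everything else needs an explicit grant.
    return asset.classification is Classification.PUBLIC or role in asset.authorized_roles

inventory = [
    DataAsset("customer_pii", Classification.RESTRICTED, "crm-db",
              owners=["chief_data_officer"], authorized_roles={"support_lead"}),
]
print(may_access(inventory[0], "intern"))  # False: no grant for this role
```

An inventory like this is what makes the later AI conversation possible: you can only decide which data may flow into an AI tool once you know what you hold and how it is classified.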
Pamela Isom:Yeah, I agree with that. You knew I was going to agree with that. So the biggest concern I have is: do we understand the assets? I always talk about the assets, and I do think that data is one of the critical assets that can make or break an organization. From a privacy perspective, you've got to know who has access to the materials, who is able to get access because of AI, and then who's using the data to advertise. Nowadays, because of cookies and all this, people are able to get access to information, use that information, and make additional decisions that you probably didn't even think about, like advertising.
Pamela Isom:So I do think that the whole data ecosystem is worth securing and protecting, and in order to do so, you have to understand the lineage, you have to understand the actors, you have to understand how they could potentially use it. It's a bit more complicated now than I think we thought about in the past. I think it was always there, but AI has elevated the concerns and the need for a focus on data. And from a CISO perspective, which we're going to get into, CISOs don't always see data as their primary responsibility. From what I have been experiencing, in the past CISOs have thought that the data is the responsibility of the chief data officer. So now I think this has brought to bear that it is the cybersecurity leaders who literally protect the data, but the data officers help us understand where the data is, the lineage of the data, and what good practices to put in place to help protect it. The two work together. What's your perspective on that?
Greg Sisson:Absolutely. I mean, that's a hand-in-glove relationship, and in organizations that don't have a chief data officer, who is going to do it if the CISO is not looking at it? And if you look at the cybersecurity triad, confidentiality, integrity, and availability, all of those things are based on data. All of those things are talking about the confidentiality of data, maintaining the integrity of data, and then the availability of data so that people in the organization can use it for the purpose that it was intended.
Greg Sisson:So yeah, I think when we look at it from that perspective, CISOs absolutely have a responsibility to protect it, but it's a huge benefit to have a chief data officer, somebody who really understands data and can help you from a management perspective, helping to educate the users in the organization about how to classify their data and then, working with the CISO, how to put controls in place around that data to protect it, but also make sure that it is available, if needed, to third parties and to other organizations. So I think it's a symbiotic and important relationship, and I think the chief data officer's position in the organization probably needs to be raised up now with AI and the implications around the potential for data loss. I think organizations need to go back and re-look at their data loss prevention programs and the controls they have in place around data loss, now that we have tools like GPT, where there is a greater potential for data to leave the organization without the CDO or the CISO knowing about it.
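As a rough illustration of the data loss prevention re-look Greg calls for, here is a minimal sketch of an outbound-prompt scan, the kind of control that might sit between users and an external generative AI tool. The pattern names and regexes are illustrative assumptions; a real DLP program would use the organization's own classifiers and far more robust detection.

```python
import re

# Illustrative patterns only, not a production rule set.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "internal_marking": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def scan_outbound_prompt(prompt: str) -> list[str]:
    """Name every sensitive pattern found in text about to leave the
    organization, such as a prompt bound for an external AI tool."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

prompt = "Summarize this memo marked CONFIDENTIAL for me."
findings = scan_outbound_prompt(prompt)
if findings:
    # Block, redact, or route for review rather than sending it out.
    print(f"Blocked outbound prompt; matched: {findings}")
```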
Pamela Isom:Yeah, I agree with that too. So, the data breaches. I was just in a discussion recently, and we were talking about the rise in data breaches, how they're occurring more and more, and how sometimes they're becoming so common that organizations are looking the other way. There was an experience just recently with one of the mobile carriers, and we found out about it a little late, if you ask me, when they finally informed the public about it. So I am concerned that we need to pay attention to data breaches and inform consumers sooner in the process that a breach has transpired. And I get it, Greg, that this happens because you want to try to get your fundamentals in place first, so that when people start calling about their information, you have a response you can provide to them. But what's concerning is that it was late when we got informed. So how do we protect ourselves? Yes, you have the insurance that you're going to make available.
Pamela Isom:My concern, and what I'm hoping the CISOs and the folks involved with digital transformation in general understand, is that information travels quickly, and organizations don't need to look the other way. We need to be transparent, we need to be forthright and let consumers know that these breaches have occurred, and breaches should not be the norm. We need to be doing everything we can to prevent them from happening and to monitor data loss. That is why I think it's so good to have you talking with me today, because one of the things I really am concerned about is that breaches of information are starting to get an "oh yeah, that's normal" reaction, and it shouldn't be normal. What's your perspective on that?
Greg Sisson:Absolutely. You talked about getting information out and being transparent. I was always a firm believer, and was trained, that it's much better to get facts out quickly when you have a breach, because otherwise people are going to make stuff up, they're going to make assumptions, and then you're going to have to recover from those assumptions. It's just going to be harder from a public affairs perspective, from a reputational risk perspective, and everything else. So yeah, I'm a firm believer in getting facts out early and often, trying to get ahead of perceptions, because otherwise it's just going to be worse for you in the end. You're thinking you're giving yourself time to prepare, but in the end it's going to be worse, and you're going to spend more time and more resources responding to things you didn't want to respond to, versus what you could have done by just putting facts out early and often to the media in the case of a breach.
Pamela Isom:Okay, so here's a question. As a seasoned CISO, tell me what you see as the benefits and risks that CISOs face. We talked about it a little bit, but I want to go a little bit more into the AI era, this digital AI era. What do you see as benefits, and what do you see as risks?
Greg Sisson:I think the benefits are very apparent. There are tremendous benefits around contracting, and there's a long list of things that are hugely valuable to organizations: taking high-demand, low-density parts of your workforce and relieving them of responsibilities that can easily be put into a generative AI tool, a tool that will help them do those tasks and enable them to focus on other things. I think that's hugely valuable, and there's a never-ending list of the benefits of AI in an organization.
Greg Sisson:I think where CISOs get into trouble is when they can't see those benefits because they look at it just through a risk lens, and they try to ban it: we're not going to do AI, we're not going to allow people to use ChatGPT, we're not going to allow access to these tools. Anytime you do that as an executive, and especially as a CISO, then inevitably you're going to have shadow AI. What I mean by that is you're going to have people in the organization who want to use it because they see the value in how it's going to make their job easier. So they're going to do it. They're going to find ways to do it, which means finding ways to circumvent the controls that you put in place, potentially opening up additional attack vectors and additional risks that you don't want. So I think the most important thing is to go back to that understanding of the organization and the mission and talk to stakeholders: how do you want to use AI? How have you as an organization identified uses for AI? Now let's talk about the risks associated with that, and then how do we put controls in place to enable you to use AI?
Greg Sisson:It's similar to the discussion we've always had about personal email and some of those other things that potentially introduce risk into the organization. The response is always, well, you can't do personal email. Instead, let's talk about: yes, you can, but here's the way you do it responsibly. It's like our earlier discussion around responsible use, and that's, I think, the most important word when it comes to AI: how do we do it responsibly? And we look at it through a security lens.
Pamela Isom:I like that. I think I heard you say be more collaborative and be more transparent. You want the CISOs and the organizations to be more collaborative with the stakeholders, be more transparent, and include them in the decision-making process as far as how we use AI. And I think I also heard you say, which I agree with, let's be more open about discussing the risks. So let's discuss the risks, let's discuss the concerns, and let's come up together with an approach for adoption of AI within the organization. I tend to process what you're saying and then translate it into my own speak, so you just tell me if I'm putting words in your mouth. But that's what I heard, and I think those are excellent points. Did I miss anything in what you were saying there?
Greg Sisson:No, I think you captured it. The other thing that I think we have to go back and look at, and we talked about going back and looking at data loss prevention, is all of our policies, to make sure that we've now addressed AI and the use of AI in all of those policies, because there are policy implications throughout the organization, and especially around training and awareness. I think that employees mostly look at the benefit, at how they're going to be able to do their job better, and they don't necessarily look at the risks. They don't necessarily understand how an adversary can now use this to exploit vulnerabilities or take advantage of something and introduce risk. So I think training and awareness is a big deal, and it helps to have somebody like you in an organization, with a background and expertise in AI, to work with when you're developing your cybersecurity training and awareness, because training is so important.
Greg Sisson:I was just reading an article this morning, and hopefully it doesn't take us off track, about what the states are doing to prepare for the elections. There was an article in CyberScoop about what Arizona and Minnesota are doing to train and exercise their election officials to make them aware of how to identify deepfake videos and how to identify social engineering that's being done using artificial intelligence. I applaud those two states. I hope other states are doing it, and I hope organizations are doing it, because it's going to take training and exercises to get people to be aware of what the risks really are and how they can still benefit from it while, at the same time, protecting the organization and protecting their personal information.
Pamela Isom:Yeah, I read that article too, and I was actually happy to see it and to know that that's going on. I also felt good seeing it because that's part of what I'm trying to do with my training programs: I blend AI and cybersecurity together, and that's on purpose, because they go hand in hand, just like data. There's AI, there's cyber, there's data; they all go hand in hand, and right in the middle of that is human beings. So in the training programs that I have, we go over these things. Yesterday I was in a discussion, Greg, talking to people about deepfakes and how they aren't always bad. If I'm using a deepfake that translates something I've said for an international audience, that's a good deepfake example. But we always hear about the scary stuff. So I try to help folks understand the benefits and the risks, as you talked about, because there are benefits, but we can get overwhelmed and inundated with the scary stuff. There's good and there's also not so good, and people are just people; if people can misuse something, we're going to do it. So the wheat and the tares grow together. That's what I was brought up knowing: the wheat and the tares grow together, so we have to have stewards in the mix to help keep everything on track. That's why governance is so important, that's why I teach ethics, that's why we do these things to help us understand. And the way to understand, like what the states are doing and what we were just talking about, is transparency, full transparency. I remember this week I was getting ready to pay for something online, and you should have seen me looking for ways to check and make sure it was an authentic site. I wanted to be sure, and we have to do that. We just have to do that because of the bad use cases of AI and the harmful actors. So I think those are good examples.
Pamela Isom:This morning, I was reading about a good use case of AI that had to do with using drones, and I know we talked about this when we were at Energy. We're talking about using drones to assess critical infrastructure and the health of the infrastructure, and one of the cities in California is moving forward with that. They're using the drones, and behind that is artificial intelligence. So when I looked at that, I thought, okay, this is good. This is computer vision, using computer vision tools and capabilities, which is a great use case. But I also thought about the security part, so they just go hand in hand. And I know we're dealing with critical infrastructure, and energy is one of those critical infrastructure components, so it made me think about that. What do you think about the drones?
Greg Sisson:Yeah, I think you were spot on. Drones are extremely valuable, especially from a physical security and safety perspective. Safety is huge. I mean, why put a human on a large transmission line, or on a 5G tower, or send people down hundreds of miles of transmission lines through country that you don't want people driving on? So, certainly from a safety perspective in energy, it's huge. And from a physical security perspective, there's being able to inspect substations to look for breaches in physical security and to watch for physical security threats.
Greg Sisson:We've had a number of instances around the country where we had people shooting at substations. So we can use drones and other autonomous vehicles to do that, to protect those assets without having to put humans there. I think it's amazing. As for other ways to use artificial intelligence, we're seeing a lot of energy companies and others thinking of new ways to use machine learning and artificial intelligence algorithms to do all sorts of things in energy around resiliency, reliability, and understanding vulnerabilities in the electrical grid. So yeah, there are huge benefits to the energy sector as well as other critical infrastructure sectors.
Pamela Isom:Yeah, okay. So we were talking about governance, risk and compliance, GRC, earlier, and I just want to probe a little bit more, or discuss a little bit more, how we see those programs evolving. I know we talked about the CISO and the CISO's role and responsibility, but what about the GRC programs? Do you think they are evolving at the pace that they should be in this AI era, or what's your take?
Greg Sisson:I think this is organization-dependent as well. It's how much the leaders in the organization want to take the initiative to make sure they're using artificial intelligence to advance their governance, risk and compliance programs. I would absolutely use generative AI tools like GPT to write policies, and when I say that, I don't mean write the policy, take it straight out of the tool, and issue it to the organization. But I certainly would have loved to have had those kinds of tools to at least draft the policies and then give the draft to a team of people, to at least give them a head start, a warm start, on developing a policy.
Greg Sisson:I mean, it's amazing. I've had some people give me examples of policies that were generated from an AI-based tool, and they're pretty darn well done. It doesn't take a lot to then have your experts, with their understanding of your mission and how that policy applies to your organization, make some small tweaks. What an amazing savings in time and resources, being able to use that for policy generation. So from a policy perspective, in that part of GRC, I think it's amazing, and I think we should all take advantage of it.
Greg Sisson:The other thing we talked about earlier was just going back. I think it's very important to go back and review your governance process and your compliance process, and training and awareness falls underneath there as well, which we talked about. It's important for everybody to go back and review all of those things to really understand whether or not they cover all of the risks and the benefits that artificial intelligence introduces to the organization.
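For readers who want to picture the draft-then-review workflow Greg describes, here is a minimal sketch. The `llm_complete` stand-in and the reviewer list are hypothetical placeholders for whatever generative AI client and approval chain an organization actually uses; the one fixed idea is that the tool produces a warm start and humans issue the policy.

```python
def llm_complete(prompt: str) -> str:
    """Stand-in for a real generative AI call (hypothetical)."""
    return f"[DRAFT POLICY]\n{prompt[:80]}...\n(placeholder body)"

def draft_policy(topic: str, org_context: str) -> str:
    # The tool supplies a head start, never a finished policy.
    prompt = (f"Draft an acceptable-use policy on {topic}. "
              f"Flag open questions for reviewers. Context: {org_context}")
    return llm_complete(prompt)

def human_review(draft: str, reviewers: list[str]) -> bool:
    """The gate Greg describes: experts who understand the mission
    make the small tweaks and sign off before anything is issued."""
    print(f"Routing draft to: {', '.join(reviewers)}")
    return False  # nothing is ever approved automatically

draft = draft_policy("generative AI use", "mid-size energy utility")
if not human_review(draft, ["CISO", "CDO", "legal"]):
    print("Draft held for expert revision before issue.")
```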
Pamela Isom:I agree. I think that any material generated for us via AI technologies needs continuous monitoring. There need to be humans in the loop, and I'm going to come back to that, but there need to be humans in the loop, and not only that, but continuous monitoring, because situations change. So a policy that's reviewed by the ethics team, for instance, for ethical assurances, and by the risk management teams for risk mitigation purposes, that's a good thing, and that should be a part of the process. The problem is that I'm getting complaints that humans slow things down. Those are discussions that I've been involved with. I've literally been in debates, right, because people are saying humans are starting to slow things down.
Pamela Isom:What I always say is: that may be true, but humans are also biased, and not everyone is introducing harmful bias, and that's why we talk about diversified stakeholders, diversity of opinions, diversity of perspectives. So I think it's important to break the collateral that's generated by AI into groups and determine what use cases make the most sense for AI. I think a policy like what you described is a good example, because it literally helps with the development of the policies, and then you just have those checks and balances in place. But I tell you, I was in an interesting discussion this week where one of the people was saying that as long as we keep humans in the middle of it, it's going to slow things down, because humans are biased in themselves. And I again re-emphasized the need for humans in the loop; that's why you have diversity of opinions and diversity of perspectives.
Pamela Isom:I even go back to the example we mentioned earlier about the automation of the drones that are looking at the utility lines and the fencing and the perimeters, everything for safety, to keep humans out of harm's way. There still needs to be a human in there somewhere. My response to that person was: I know we still need to get to the place where automation runs with minimal human interjection, but there has to be a human for many reasons. If the AI is building the rules for itself, if it's starting to decide on its own what needs to be done, humans need to be able to shut that thing down if it goes off course. So you cannot take humans out of the loop in their entirety. But those are discussions that we're starting to have, because there are concerns.
Greg Sisson:I'm so glad you teed this up, because I did want to go back to the human element. I was so glad when I was reading the policies and things coming out from the White House and from the Department of Energy talking about responsible use. They're talking about responsible AI, they're talking about the development of these learning models, and you were talking about biases.
Greg Sisson:I mean, unfortunately, those biases can be carried into learning models. The artificial intelligence starts to develop those biases, and so I think it's so important that we look at this across a very diverse spectrum, that we really understand the roots of how AI performs and how it learns, and that we make sure we have very strict controls and regulatory oversight over the development of those learning models, so that we don't allow the wrong biases to be injected into them and then pay for that later on down the line. I may be mixing something up, and you can probably expand on this a little bit, but I think the human element in the training of those learning models, and the biases that could be introduced, is an important discussion.
Pamela Isom:It is, you got it. The easiest thing to do, and it's not easy, by the way, but the most straightforward thing to do, is to understand where you want second and third opinions, second and third points of view. So: independent assessments, secondary points of view, secondary perspectives. That is what we need, because that independent evaluation helps with the understanding. Everyone has biases. The independent perspective says, well, did you consider diverse situations? Did you consider this? Did you consider that? So that brings that perspective, and you can automate that. You can automate it.
Pamela Isom:The thing for us to do is to be stewards about when automation makes sense, and also be good stewards about AI: what use cases make the most sense for AI? What are those risk categories? So, starting to get an understanding from the organization's perspective: what are the risk categories for AI use cases, what makes the most sense, what use cases fit within those risk categories for my business? And that's what we do, right? We're trying to help organizations understand this. And speaking of that, I was looking at the report that was published by the Department of Energy, where they speak to different risk categories. They created this AI summary assessment report, which is based on the executive order on artificial intelligence. They have four risk categories, and one of the risk categories was compromise of the AI software supply chain. This sounds like you, because when we worked together at Energy, and I know it's been a minute, I remember things. I remember good experiences.
Pamela Isom:So we always talked about the supply chain and supply chain risk management. I'm very particular about the data and the source models in the supply chain; that's a part of the supply chain that's very vulnerable. So I noticed that they had this risk category number four, compromise of the AI software supply chain, and what they look at, in addition to the supply chain, is critical infrastructure and the cybersecurity risks associated with critical infrastructure like energy. I wanted to get your perspective on the executive order and then that report. What's your take on supply chain management in the era of AI and cybersecurity?
Greg Sisson:I think it's analogous to the work that we're doing around a software bill of materials, around SBOM, and just really understanding. I mean, it goes back to the development of these language models and how we're protecting them across the supply chain and the development chain, how we're protecting them from being compromised. It's one thing to build a language model, but if an adversary is able to access that language model and change it from what it was originally intended to do, then we've got some serious issues from a risk and safety perspective. I think it goes back to a software bill of materials: supply chain risk mitigation around the language models and how these AI tools are developed, putting controls in place so that what was intended when it was released by the developer, as it's taken through the development lifecycle and actually put into the procurement lifecycle, is protected, with those critical elements not changed in any way for malicious purposes.
Greg Sisson:That's my perspective on it. I mean, there's much more to come from a supply chain perspective, but I do think that's the most important part of it and that's my understanding of it.
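One concrete control in the spirit of Greg's SBOM analogy is an integrity check on model artifacts against a manifest recorded at release time, so that any change to the model between developer release and procurement shows up as a hash mismatch. A minimal sketch, assuming a hypothetical model_manifest.json that lists each artifact's path and SHA-256:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash an artifact in chunks so large model files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(manifest_path: Path) -> bool:
    """Compare every artifact against the hash recorded at release time;
    any mismatch means the model changed somewhere in the supply chain."""
    manifest = json.loads(manifest_path.read_text())
    for entry in manifest["artifacts"]:
        if sha256_of(Path(entry["path"])) != entry["sha256"]:
            print(f"TAMPER WARNING: {entry['path']} does not match manifest")
            return False
    return True

# Refuse to load a model that fails the check.
if not verify_model(Path("model_manifest.json")):
    raise SystemExit("Model failed integrity check; do not deploy.")
```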
Pamela Isom:And one of the things, and I agree with you, I'm with you a thousand percent. One of the things I'm thinking about as we're talking is that I have solar in my home, the traditional solar panels, and we have a device where I can monitor the consumption: how much we're pulling from the grid versus how much we're pushing to the grid. In the day and time of AI, there's some AI in there getting me this real-time information and making predictions for us on how to better do some things, so that we're pulling more from the sun and less from the grid.
Pamela Isom:And so, in the day and time of digital transformation, cybersecurity, and adversarial AI, the adversary could look at exploiting my process in order to get to many user accounts at the utility provider. I think these are the types of things that one might think are way out there and unlikely, but that we have to guard against. Even without thinking about it, if you go back to the cybersecurity fundamentals, be careful about your passwords, be careful about where you're leaving information; basic cybersecurity hygiene will help protect against instances like that. But in the day and time of AI, this type of information is vulnerable. I think it's a part of the threat vectors for adversarial AI, so I just wanted to add that.
Pamela Isom:Okay, great, keep me straight. So now, we're in the portion of the discussion where we're about to wrap up, and I want to know, first of all: is there anything else that you wanted to discuss? Let me ask you that first.
Greg Sisson:So there is one other point. I don't want to get into the scary part of AI, but I do think that organizations, as part of their training and awareness for their workforce, need to really understand how an adversary can use AI, because it is important.
Greg Sisson:It's an important way to train your people: help them understand the risks they could introduce to the organization by showing them how an adversary could use AI, especially when it comes to using artificial intelligence for open-source intelligence gathering and social engineering, and how much more effective a phishing attack could be if an adversary used AI tools for that social engineering and campaign development.
Greg Sisson:So I think that's important for people to understand. And, going back to the discussion around training people on AI, it's also important to understand how deepfake videos can erode trust, how people inside the organization can recognize malicious deepfake videos, and how those videos could be used to impersonate not only political figures but also executives and other people inside the organization, to get them to do something malicious like transfer money. So I just think the adversarial part of AI, the threat part of it, is very important for people to integrate into their training and awareness programs for their organizations. That was my last point.
Pamela Isom:I think that's an excellent point. I'm going to reach out to you outside of this and share some things about what I'm doing with my training programs, so you can see the adversarial components, because you're right: you want to balance the good and the adversarial components, but you don't want people to be in the dark. So I'm going to get with you and share a few things, and I'm going to see if you have some perspectives, because I know you do. As we wrap up here, do you have any words of wisdom or experiences that you'd like to share with the listeners?
Greg Sisson:I do, and I thought about this a little bit. I think it's really just the basics that we kicked off with: knowing your mission, knowing the organization, understanding the benefits and the risks, communicating those to the people in the organization, and working and collaborating with stakeholders. There's a common saying among security professionals: don't be the office of "no," N-O, but instead be the office of "know," K-N-O-W. Take the time to educate yourself and your staff on how artificial intelligence works and how it can be used for good, but also understand, from the same perspective, how it can introduce risks, and then communicate those and train the people in the organization. Those are my parting words.
Pamela Isom:That's pretty cool. So don't be the office of N-O, be the office of K-N-O-W, and make sure you stay collaborative. That is just powerful. And I really appreciate that you pointed out that we want to focus also on the benefits and not make AI a tool that is not permitted within the organization, because you see the stats and I see the stats: people are using generative AI. There were numbers that came out this morning; workers within organizations are using AI whether the leaders approve it or not, even if they have to put it on their personal devices. And how are you going to retain staff if you're blocking the use of tools that they need? I really want to thank you for taking the time to talk to me today, for participating in this podcast effort, and for all the support that you have provided and continue to provide. I appreciate you very much, and I want to thank you for just being here.
Greg Sisson:I appreciate you too. Our friendship goes back a number of years, and for the help you gave me when we were thinking about starting a company, I absolutely appreciate your guidance and your wisdom. Thanks for inviting me today. I enjoyed it. The time flew, so it must have been a good conversation.
Pamela Isom:A good conversation.