AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
AI or Not
E003 - AI or Not - Julie Schroeder and Pamela Isom
Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.
Prepare to be enthralled by the profound insights of Julie Schroeder, a
trailblazing Chief Legal Officer, as she shares her riveting transformation from
trial and appellate lawyer to an advisor for tech titans who happens to also be a
lawyer. Her pioneering efforts in healthcare IT and natural language processing
predate the AI boom by decades, setting the stage for a deep dive into the world
of data and expertise needed for of most companies to properly use and protect
data on a global level, as well as AI/ ML and cybersecurity. Julie’s narrative is not
just her own; the story of a digital landscape in constant flux, where
cybersecurity emerges as a critical linchpin for business continuity and the
protection of our most sensitive data.
Our conversation with Julie doesn't shy away from the hard-hitting challenges and ample opportunities that cybersecurity presents. Together, we explore the pressing need for specialized education and the integration of data and AI into cybersecurity strategies. Security breaches aren't just about data theft; they're a sobering reminder of our vulnerabilities in an interconnected world. Julie's expertise brings to light the critical role of cybersecurity in safeguarding our digital transformation journey and the ethical implications that come with stewarding vast amounts of data.
As we navigate the complex terrain of AI, Julie imparts wisdom on the ethical use of data and the potential pitfalls of over-reliance on algorithms. The balance between human oversight and technological advancement is a delicate dance, one that requires constant vigilance and education. In this episode, we not only dissect the intricacies of AI's role in cybersecurity but also confront the broader questions of data ownership and security. Whether you're a professional in the tech field or simply captivated by the evolving digital age, Julie's expertise offers a valuable compass for steering through the ever-changing cybersecurity landscape.
Pamela Isom: 0:00
This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and not legal advice, neither health tax, nor professional nor official statements by their organizations. Guest views may not be those of the host views may not be those of the host.
Pamela Isom: 0:33
Hello and welcome to AI or Not, a podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and your digital transformation journey. In your artificial intelligence and your digital transformation journey. I'm Pam Isom, I'm the host of the show and we have a special guest with us today, julie Schroeder, and Julie is an accomplished chief legal officer. We've become really good friends, I would say. She's a chief legal officer, general counsel and business advisor in AI, ml, cybersecurity and data, and she can tell you a whole lot more about herself. So, julie, welcome to AI or Not, and I'm going to hand it over to you to tell us more about yourself. Tell us about your career, your experiences, anything you want us to know as far as you and your career journey and where you are today.
Julie Schroeder: 1:30
Sure, wow, that's a big question. So I remember when I met you and I was describing my career, I called it Forrest Gump's box of chocolates kind of beginning. I was a trial lawyer for 10 years down in big firms that no longer exist. One was called Harry and Simon, and if you ever watched Julia Roberts and the Pelican Brief walk through the evil law firm, that was the lobby of Harry and Simon. That's all that's left of it.
Julie Schroeder: 1:57
And I actually did try cases. I did hundreds of depositions, I was in front of juries, I was in front of judges. I didn't just do them in DC. I had one of the largest racial discrimination class action cases. I got into Supreme Court cases by accident because some partners that I used to work with worked with the AG, and so that brought me into Bush versus Gore, which I know sounds crazy. Me into Bush versus Gore, which I know sounds crazy. What's crazier still is, when you have a Supreme Court case, the questions presented are in an order of how strong you think the arguments are.
Julie Schroeder: 2:33
Equal protection was thought to be the no way in hell argument because you'd have to overturn 70 years of precedent, and so, sorry, that's what ended up happening. I worked on that one. I think Ruth Bader Ginsburg dissent says it all all 30 plus pages of it and after that I was trying to figure out what to do. Someone offered me a general counsel to look at a general counsel position. I hadn't really thought much about that. I never thought I would be qualified because the only people I saw in general counsel positions in 2007 were white men that came out of corporate M&A departments of large law firms and I said that. I said I don't think anyone is going to be remotely interested and I was told that there's no dearth of white men from those departments and to give it a try. So I did and I ended up loving that job. It was healthcare IT on a SaaS platform with natural language processing, with human in the loop and a sentiment analysis and yes, I just said 2007 and natural language processing and data and privacy, and that's how I got started. I remember one of the other things that you and I talked about was that back when we started, we don't have certificates because they didn't exist. We didn't have guidelines, we didn't have courses to take. When I was doing this, the only company I'm aware of that was also doing this type of computer science was Google, and that's because we were fighting over H-1B candidates from other countries. So that's how I started.
Julie Schroeder: 4:21
I have always looked at my career as an opportunity to learn, grow, be surrounded by people who are a lot smarter than I am, and continue on that journey, because there's always a new way to look at something, and the exposure to people who have expertise in that for me has been phenomenal. Also, their patience when I ask questions is very much appreciated. So, after pretty much 20 years now, when I first came onto the scene in 2023, because I hadn't been on LinkedIn I keep making the joke it looked like people thought AI spontaneously generated itself and it didn't exist before and everyone was suddenly an AI expert. They had no experience, but they're an AI expert. It became very hard to find a job in 2016.
Julie Schroeder: 5:26
That was different types of AI, with qual analytics again with human in the loop and sentiment analysis. That was sector agnostic, switching fields, and I said I wasn't. If you can't lock down your healthcare data with cyber security, you're not going to have a business, and that was always the thing that I was the most concerned about was actually cyber security To be a company killer. That brings me to today, where I try and talk to a lot of people, as you know, don't appear often not except us being friends aside. I think you are amazing, so thank you. I am very happy to be here today.
Pamela Isom: 6:15
I'm happy to have you Great time to get together and talk for a little bit. I would like to know more about your experiences with AI and cybersecurity. I know you started to go into it a little bit, but tell me more. And what do you see as that connectivity between AI and cyber?
Julie Schroeder: 6:34
At the time, it was an AI, ml, non-blockchain enabled solution that combined a lot of things that was thought to be like a chain of custody of your data, so you always know where it was and you could follow it through its path and if it got to somewhere it didn't want to be, you could use what's called a poison pill, or other words, destroy that data so it wasn't breached. That company was amazing. The backers and talent were fantastic. Unfortunately, it was just before COVID started company and when we started looking for fundraising in 2020, cybersecurity I got to say even though I cannot, for the life of me, understand it is not a hot topic now and it really wasn't in COVID Healthcare definitely got the push. I helped to start that company, to fund that company, to put the structure in place in the company and then give advice outside GC, and then we amended the bylaws to make sure I had certain authority as an independent contractor. But it was my first foray into people who just really knew cybersecurity and it was fantastic because I wasn't aware of all the ways that data could be vulnerable. Vulnerable. It really as scary as I was before that. It really opened my eyes up to a lot of the problems that exist in cybersecurity.
Julie Schroeder: 8:14
Some are newer. We both know AI has made it more difficult. We can talk about that, but even before people were using AI chatbots on farms to spread disinformation or what have you, which is actually a true example if you look it up in the newspaper, it was a big thing. That else that comes with it regulatory compliance contracts, dealing with global issues. I luckily had two decades to aggregate that knowledge, so I'm happy about that. I'll be more happy when that knowledge is valued, as opposed to having a certificate. Sorry to disappoint, but us old folks, we don't have them. There's a really good reason and maybe you want to look at them in a different way, but it's because we've been doing it before. It was publicly known, in fact, for 18 years people had no idea what I did with my career.
Pamela Isom: 9:26
Yeah, but certificates are good Knowledge is great.
Julie Schroeder: 9:29
Knowledge is fantastic. I mean depending on where the knowledge comes from. But generally speaking, knowledge is fantastic, but we have to call it what it is, which is abstract, theoretical knowledge for the most part, which doesn't mean that you don't parlay it or you don't have the experience, but if that's all you've got is abstract conceptual knowledge, you haven't been in a company solving these problems, looking at these issues, seeing how the parts interplay. It's really just a whole different ballgame.
Pamela Isom: 10:00
Yeah, that's what I think one of the challenges is with cybersecurity.
Pamela Isom: 10:06
I don't think we have the opportunities to get more experiential learning like we could, and I think there needs to be more attention paid to that.
Pamela Isom: 10:18
So, if we have to establish institutes, where that's all that we're doing and we get more institutes in place and now we add AI into the mix so that we're using AI not only for offensive purposes, but we're using AI for defensive purposes as well, when it comes to the cybersecurity piece, if we could do more of that.
Pamela Isom: 10:43
I mean, we talk about red teaming today, and red teaming is a big topic in the AI community, but if you really think about it, that needs to be automated. We need to automate that whole red teaming process because people are just going to get tired of trying to manually do these things. That's going to be required when you know that there's massive amounts of data and massive amounts of vulnerabilities because of AI. So when I hear people say, well, we need red team, well, I immediately start thinking about how we're going to automate this process so that the red teaming is effective, because otherwise people are going to get tired. People are going to get tired and they're not going to do the job that we need done when AI is in the middle of this situation.
Julie Schroeder: 11:30
Which is new to all of them, and I couldn't agree with you more. I think that's an excellent point. The other thing is red teaming isn't red teaming everywhere, so people who have been doing it more will look at, say, dns protocols and their subdomains. Newer people have never heard of it because they weren't around when the internet started and they don't know what it stands for and what we're going to talk about. Like IoT was not built with security in mind. It was just built as a simple translation device between an IP address and actually a namecom. But it's also, I think, responsible for 85% of all cyber security tax, and the current estimate on that I was looking at yesterday was, I believe we're at $8 trillion in losses.
Pamela Isom: 12:26
Yeah, IoT devices are a real vulnerability. So you know, with my background being at energy, IoT devices are used all over the place and we need them to help to fortify the grid and we need sensors. So we need lots of IoT devices, we need lots of sensors. And one of the things that I hear complaints about when I went to one of the public conferences a few months back, one of the complaints I heard was that, honestly, the minute they train up people in cybersecurity, they're not interested in learning about OT operational technologies and all that, so they lose resources. So there's a problem.
Pamela Isom: 13:12
Today, from what I heard which, of course, with my software engineering background, I'm like, OK, well, we're going to have to do something to fix this. So what can we do to help? So, what can we do to help? But what they were pointing out as the conferences, that was one of the biggest challenges. So they're having trouble retaining people because as soon as they get them trained up into cybersecurity proficiency where they can literally get where they've gotten, the experiential learning, et cetera they get snatched up. So, yeah, and that's a common problem, it is In the OT world in the cyber world in general.
Julie Schroeder: 13:51
Silo is what we've come back to talking about 18 million times, right? Yeah, most people look at information in silos. If you're doing cybersecurity, like you said, it's the same red teaming exercise. You don't need to do anything else or learn anything else. And something like the IoT. You don't even know what it stands for, forget what it does or how it can present a threat vector. But your background? You're really one up on that, because I don't think a lot of people have worked in an industry so dependent on sensors to make sure that everything's working automated and regulated, which is the point. The point of most of it is to be good. The problem with most of it is the same thing with DNS protocols. They weren't built with security in mind.
Pamela Isom: 14:42
They weren't built with security in mind. Yeah, so when I think about AI and I think about cyber together, I can't think of separating the two. It's like data. Whenever you think about data, you always think about how to protect the data, and the same way with the AI models. I don't care if it's a chat bot that I'm putting on my website, and I'm still thinking about how do I make sure that it's not vulnerable. And the same way with IoT and OT devices. So we have to start thinking about that more.
Pamela Isom: 15:15
So one of the things that we're trying to get to with this show is digital transformation at large, and what are some of the things that we need to be doing to deal with and be able to sustain digital disruption and digital transformation?
Pamela Isom: 15:34
And that's a big umbrella. So part of it is the cybersecurity, part of it is the data. Earlier today, I was talking to someone about the human factors, which I'm sure we're going to get into here, but in that discussion, it was more about energy the time and energy that we're spilling into unnecessary conversations and dialogue. That's very draining, and we need to be healthy in order to do our jobs effectively. Well, for this conversation, we're more into the technology and the cybersecurity side of things, unless you want to talk about some of the other human side of the equation, but I do want to know a little bit more about. You had some examples around fleet management, and digital transformation is all about transforming the systems and getting away from this old equipment that can't handle the challenges and disruptions of today. So what's your take on the older infrastructure, particularly around the hardware?
Julie Schroeder: 16:46
Around the firmware. And when it comes to this area and IoT and I think infrastructure is what scares me the most. Also, 80% of it is owned by private institutions. So when you start thinking about that and how you're going to possibly change it, it becomes difficult. We'll do the basics.
Julie Schroeder: 17:10
Iot, internet of Things that means devices, wearables, smart devices, phones some of the combination of both Devices could be your watch. It's helpful. It takes down all your health information, it tells you when to take your pills, it knows when you're asleep and it knows when you're awake, even though it's not Santa, and it has all of this information on you. And then what's protecting that from just getting out there? What happens if you're taking some type of tamoxifen or something that gives away that you've had a breast cancer problem and that gets out at large? So you came at it because they're necessary for your industry and you're like how do we make sure to protect it? I came at it because we started doing API call outs for data. They're not terrifically secure, so you had to really think about how to take care of that.
Julie Schroeder: 18:26
And IoT I'm just looking around my house Phone here, I've got my iPad, here I've got a smart device here. I have a Roomba that will assume it was a new one, so that Amazon bought it and put sensors on it so that when it goes around my house, it literally does this it sends information back about your house, what you have, how big it is. It's taking pictures of your house inside of your house. That's scary. Well it is. It's a lot of information. So if you're comfortable with that much information being collected about you, I think the thing that you need to absolutely be sure of is that that information is protected, and I remember this case, and I'm sure you probably will too, even though we haven't spoken about it. I think it was 2022 and millions of people were hacked because their Bluetooth enabled keys for their cars just unlocked.
Pamela Isom: 19:30
Yeah, yeah, I remember.
Julie Schroeder: 19:33
So these are the kinds of things. They're around you. It's not a question of putting them in your life, and they're not just around you, just around you. They're around everything that you depend on to get food, water, transportation, the supply chain, the IOTs involved. That's where we started talking about some of the more challenging aspects of it. Do you want me to give an example? Yeah, go ahead. So before the trucking fleet, we're going to talk about that in a second. I had written something in the summer about.
Julie Schroeder: 20:13
There have been hacks on our water supply Happened over the summer. There is a CISA emergency directive. There's a device called the Unitronics device firmware, manufactured in Israel, and the reason that CISA knew that it was going to be a problem is because it had been attacked by Iran and Israel already, so they already knew how to do it. Despite that, unitronics it manages water pressure in your water main. It's not like an incredibly sophisticated thing, but it is again connected, in this case by Bluetooth and sometimes by API, to the network, which, anytime that happens, opens yourself up for someone to take command and control of your system or at least insert malware in it such that it doesn't work the way it should and could potentially spread it to everyone you come in contact with. So it's like a phishing attack times a million.
Pamela Isom: 21:25
Yeah, take that situation because the water supply one would think that our water supply is not vulnerable and that no one cares about a device like what you just mentioned. Right, no one is really interested in the Bluetooth devices. Just make sure we get the water Instances. Two, there was one incident where the operator detected that the data had been tampered with and was able to see something before the attack became a issue. Thinking about that person and I'm thinking about what you just described, and I'm thinking about education and training and building capacity what would be your suggestion or your thoughts around what we could do to help pay more attention to things like this, because it doesn't always have to be this major catastrophe for us to become cyber stewards. So any thoughts around. That Is the issue that we just take this for granted. We just think that it won't happen to us. What do you think?
Julie Schroeder: 22:42
about that. In most of the infrastructure cases, as you know, the main problem was the unit was set to a default password. This was the case with all the electronics devices in the water supply. It will become a familiar theme as we go through a couple of other critical infrastructure challenges. But I mean the first thing is ensuring some kind of NIST compliance with standards for passwords. Very frustrating to me that NIST and CISA have all of this incredibly good information that they publish but no one is required to follow it. But you've got to start with the basics, which is you don't set your passcode to your ATM to 1234 for a reason. That should be the same thing if you're in charge of the water supply for all of Western Pennsylvania. That's one aspect. The same thing would be an open API or an SSID again set to a default password or a very easy to guess password. Those have been the culprits in pretty much everything that I've looked at.
Pamela Isom: 23:59
What is your take on that? My take on that is the same. We have to move away from thinking that's the responsibility of the system and take ownership and practice good cyber hygiene and cyber stewardship, and that we should all do it. And I also don't think that cyber vulnerabilities has a special piece of equipment, that it is most vulnerable. I think that we don't want to be thinking along those lines. We want to be prudent about all of our equipment, so all of the electronics, all of it. So that's kind of my take on it, and I believe that some of the fundamental issues can be covered in more education and training.
Pamela Isom: 24:43
We know that AI is going to amplify the risk, so we have to get some of these fundamentals out of the way and then continuously reiterate those points and try not to assume that just because it's a device, that maybe, maybe it's a digital pen and we think that no one is going to care about that because it's just a little pen, yeah, well, watch, yeah, the watch.
Pamela Isom: 25:06
I think we can watch out for more, maybe not, but it's all the same, is my point. So it's all the same. So we should treat it all as a potential vulnerability. I don't care what it is and would you leave your key? Would you tell other people which key is your house key and then let them know where to find it? That's why you don't use those default passwords, because everybody then knows pretty much what the default passwords are. They don't always know, but it's a little bit more vulnerable to just go with those instead of your unique passwords or even those that are generated by the system. But still, let's be careful, let's be wise, let's be prudent. So that's my take on that.
Julie Schroeder: 25:59
I agree with you completely.
Julie Schroeder: 26:01
It's worse in industries because they only tend to use one or two devices, devices and those devices in the case of the water supply, we're using the same, it's all the same and they are all using the same default passcode. It would be one thing if there was a lot of variety about, maybe, what's making sure that you know the pipes don't burst and the water supply is being regulated, but it doesn't work like that, industry by industry. There are two things that you've said people in my conversations have had a hard time understanding.
Julie Schroeder: 26:44
One is why AI makes cyber so much more difficult, makes cyber so much more difficult and I gave an example before, but you just mentioned that, so it'd be great to have you explain a little bit more about that and the other is the training, whether you're the person in charge of this device for the water supply, or the person in charge of the energy grid that covers Dallas, or the person who is in charge of making sure to architect an API for your company's confidential data, for your company's confidential data. That training I think you would have some thoughts on it being quite similar and the ways that it's necessary.
Pamela Isom: 27:43
Yeah, if I think about some of the training, I'll start with the training piece. The training, I believe, should be tabletop exercises. So tabletop exercises include a situation where a piece of equipment is vulnerable and what are we going to do to address this challenge? Make the tabletop exercise to where oh, by the way, one of the issues is a bot was involved, ai digital assistant was involved somehow, so that we get the experience working with the generative AI side of things, perhaps, or the conversational AI, but also that you have the experience of traditional cybersecurity incident response planning, business continuity planning. So they're getting some of the fundamentals. I would build in capabilities like that in my tabletop exercises so that we know when it happens, we are familiar and so the shock that comes along with this type of situation doesn't overtake what we need to do. And we do that with business continuity planning. A lot of times you don't have these type of exercises, so those are the kind of exercises that I would include. I would include incident response planning. I would actually include logging the issues in the AI database. So there should be an AI incident response database somewhere, or event response database for AI specifically. So for that component, I would make sure that it's recorded there. That's what I tell my customers. And then, as far as, what is it about AI that makes it amplify cybersecurity risks? It's just because of the fact that a error can travel a thousand times faster and hit way many more people or more situations way, way, way faster because of the capability of AI in itself. So that's why it can amplify and reach out further and reach out faster because of the power behind the AI, and also not only reach, but it can also then make inferences and then reach some more.
Pamela Isom: 29:53
So I don't know what you are telling your customers that are asking you that that's the beauty of AI. That's actually the beauty of AI. I know there's been times when we've had work and we had special projects where we just wanted to use AI not necessarily generative AI. We just wanted to use AI, ml, nlp to read the massive amounts of literature that's out there that we really can't go through as a human being. It'll just take too long. Or just think about how we use it to sort through facial recognition information or video information. Right, we use the AI to help us pass through that information and tell us what that information is. Now, we don't get into the misuse. That's why you use it, because it has that capacity. So that's what I would say to people that are asking those questions, and that this is a very critical part of the digital transformation journey.
Julie Schroeder: 30:54
It is a critical part and there's two sides to that. There's could be done by accident, but the ability to do it on purpose is really what is tending to cause a lot of the problems at the moment. Right, and in your training I loved what you said about that and that training, I think, should be case independent. So if you're working with the generative AI bots, then by all means, that's what you should be saying. Okay, in case of failure, what's the plan? What do we do? How do we backtrack? How do we find out where we gave bad information to? The same thing would be true if you're at votegov and find a Korean server certificate on your server, right, yeah, it's a completely different type of threat, but it's the same type of exercise.
Pamela Isom: 31:52
That's right. That's why I said, like some of the fundamentals, I love the fundamentals of cybersecurity because it's like a stepping stone for other types of threats and other types of actors, If we could remember. Now I think that there's opportunities always that make cyber better, but I still think some of those fundamentals, if we go back to the 800 controls, the controls are there. How we interpret them is another thing, but the controls are there and they're there for a purpose right. Even the level one even the level one is the CMC level one the basic controls can help keep us out of basic hygiene if we just pay attention to them. We have to pay attention to them and interpret them and then overlay or intertwine AI specific scenarios because, again, AI can help us with cybersecurity and help us to be more on the offensive side and also help us on the defensive side, but it can also be a vulnerability, Yep.
Julie Schroeder: 32:59
So Speaking of which, you want to talk about some vulnerabilities in trucks.
Pamela Isom: 33:05
Well, so we talked about the water supply and the truck situation that I wanted to go into. The truck situation is pretty much around the IoT that we've already talked about and the digital devices, so we probably don't need to go into that anymore, unless there's anything else you want to bring up about it.
Julie Schroeder: 33:26
Only to say that in this case, it's the ELDs, the electronic logging devices, and those ELDs were just installed to record the amount of time a driver was driving, but they are also controlled to the main control system. In addition to the default password problem and the Bluetooth problem, they have an open, exposed AI problem, which means these same devices with their default passwords and their vulnerabilities are in 14 million of the trucks that we have carrying goods and services every day in the United States. According to a study that was recently published by Colorado State, which is what I use for the basis of my post on this, it's supposed to be the weakest link in the supply chain, but just think of what that would mean. You've got all of these systems in some form or another in airplanes, in large cargo ships, in trucks, in the water supply, in energy, in healthcare, in food and manufacturing, in our vehicles, our vehicles. Life could change very quickly and very badly if we didn't start paying attention to doing something about it. And I don't know if you noticed, but the FBI boarded the Dolly, calling it a crime scene, and their investigation is working around their control system. I saw that, yeah, so, um, it's not hypothetical, it's not a small thing that you want to pay attention to. It is part of the bigger hygiene issue.
Julie Schroeder: 35:23
Another thing I think it might make a second to just take a beat and talk about is you and I connect data privacy, business systems technology, cybersecurity, ai together. Most people work in silos. They are a privacy lawyer, they are an IP lawyer. They are a cybersecurity person who works to make sure that healthcare company gets the high trust certificate in the SOC 2.2 audit and people don't tend to look around them at the interplay much like the IOT right the interplay of all these fields and how they impact each other. I came at it from the data side, wanted to make sure that there wasn't a healthcare data breach because under HIPAA and a startup, I was the most concerned at that point, even though cybersecurity was available in 2007. Having a breach that killed the company. I know you started with the IoT and technology and cyber and came to data and we both see it as the same thing. Maybe it would be helpful if you explained for you, based on your journey, which has been absolutely incredible, why it's so obviously the same thing.
Pamela Isom: 37:00
Because it's all data, it's data, so you have to protect data. Cybersecurity protects data. It's all data. Ai is a consumer of data. It's all data. So for me, the passion that I have around data I mean I have a passion around technology anyway and how technology works. I'm a software engineer by trade, right, so solve some problems using technology and take the burden off of me as a human being if it's effective For me, and what I tell my students and what I tell people that work with me is at the core of it is data and the data has to be protected.
Pamela Isom: 37:47
You protect the data and you protect the data. You don't have privacy issues because you're thinking about stewardship. You're thinking about governance. You're not just thinking cybersecurity, but you're thinking about stewardship and governance and therefore you're now protecting the data from a couple of ways. I also bring in ethics, and that's why you always hear me talk about ethics, because from an ethics perspective, you can still protect the data and have insights. And what are you going to do with those insights that you have? Where are your ethics?
Pamela Isom: 38:19
I'm big on cybersecurity ethics, data ethics and AI ethics. I will always differentiate those, because they're similar but they're different. Things Like what we're discussing now. They're similar, but they're unique in their own way. Cyber protects, and you know what I think about. I think about our troops and our veterans. They keep us safe. Cybersecurity keeps us safe. Our troops and our veterans have insights that we don't know that they have, and we trust them with that information. Right, our cybersecurity people. We have to trust them. We have to trust them. So I will always speak highly about cybersecurity. Why it's important. The data and then AI. For me, and what I teach is, ai is a consumer of the data. That's what it is. It's a consumer. That's how I see it. So we're going to run out of time here.
Julie Schroeder: 39:11
What I do want to know is let's talk real quick about unless there's anything else you want to talk about- I just wanted to say really quickly that ethics in AI is a question I wish someone would talk about more, so I'm glad you brought it up. We can talk about it more.
Pamela Isom: 39:32
Okay, so we'll save the ethics discussion for maybe a roundtable. So we'll look at what we want to do in an upcoming session. But what I want to know now is I think it would be good to talk about the human judgment and the human elements of digital transformation in an AI world. So we don't have a lot of time, but let's talk a little bit more about some of the concerns I've heard you express about human passing off responsibility not really taking responsibility, but shucking responsibility onto technology. What's your thoughts there?
Julie Schroeder: 40:14
I never learned another way to do this, so I'm very staid in the way that I have been doing things. I welcome any comments on it if you disagree, but part of the problem is you've got people talking about AI. It's a buzzword. You have people who want to hire for AI, but they don't understand it. In fact, it became known as AI, even though AI is not a thing.
Julie Schroeder: 40:43
It's a collection of things. It's a collection of tools. Each tool consumes data in a different way and if you're going to use it, you should know what data you're using it on. You should have done a lot of pre-training and understand what any potential bias might be, and then perhaps done some co-parenting. In order to make it a good data set, you have to mark your parametric date and then, when you're using it for a goal, you have to understand the limitations of your tool. You can't take a screwdriver and use it to take down a tree Not going to work. You need a chainsaw for that. Generative AI is fantastic in its imagination, the creative aspects of it, but its point was never to give you the right answer. It's mathematically, statistically, going to give you the most probabilistic correlated answer that it has in its database correlated answer that it has in its database.
Pamela Isom: 41:58
I think probabilistic and correlated is the right way to express that.
Julie Schroeder: 42:00
Yeah, and patterns, patterns and clusters, right, and so one would not use generative AI. Or I would not use generative AI. And thinking about my goal, if my goal is to diagnose patients who may or may not have cancer, that's not the tool I'm going to pick. It's not built to try and give you a predictive, correct answer on its data, which is still a limitation. It doesn't do well with nuance. I cannot pick up things like sarcasm. I think people would be better off if they started thinking of it as augmented intelligence instead of artificial.
Julie Schroeder: 42:45
There are some models out there that are quite amazing and I just think that, to sum that up, you have to figure out what is your goal. You have to look at what is your data, what are your data sources? What do they look like? How do they pan out? If you're going to use an AI tool, you pick the right AI tool for the goal. You make sure that the data is as balanced and correct I guess is the right word as one can hope, and even then, you're still going to get wrong answers. Yeah, because it's a machine, right, you account for that, but that's how I have always gone about thinking about how to build that system, by no means a software engineer. So your turn.
Pamela Isom: 43:33
I agree with all of that. The thing that I would add is that it's a machine. They just don't think they follow instructions. So the question becomes who's giving the machine the instruction, who's programming the algorithm? And how are they programming the algorithm and what type of information are they providing the algorithm so that it knows what to respond to and how to respond? But it's still about pattern matching. It's about clustering of data. It's about processing data. It's about looking for commonalities.
Julie Schroeder: 44:05
Your programmer adds another layer of bias that people need to think about.
Pamela Isom: 44:09
Exactly. I believe that in training and I keep mentioning training just because I have this concern that there's an opportunity to do more and to look at things more holistically when we're providing the training, but at the same time, we don't want to inundate folks. So I'm trying to strike the right balance with training from an entire life cycle perspective, even if we break it up into silos. Understand the life cycle, and I think that that will go a long ways with helping people understand. Because, do you know, people are losing sight of reality and it's like the discussion we were having last night. They are literally thinking you don't know if they really think this or not, but some are literally thinking that, because of the fact that the AI is responding in a certain way, that that's how it should go, that's how it should be. So if I take my car and GPS tells me, you're at a clip, so just go over, are you going to do it?
Julie Schroeder: 45:15
No, that's a great example and I think it requires, like you said, an understanding of let's call it spade a spade. I mean, this is great stuff. We're obviously both proponents of it. I think we counted together, we have over 50 years of experience between the two of us.
Julie Schroeder: 45:33
But people do think that they're being given the right answer and a great example of that gone very, very badly would be the UnitedHealthcare rehab case. Yeah, was a closed source, which means that no one knew the consumer using the data and responsible for that data had no idea what the data was, didn't know that there was a 99% and I'll use the more colloquial error rate to that data. Really wanted to automate the rehab benefits for geriatrics. I think that would be great because geriatrics wait too long to get answers Not saying it's a bad idea, geriatrics way too long to get answers Not saying it's a bad idea. I just wouldn't have used this particular data and this particular model to make those decisions. And then, even though they didn't design it, with a human in the loop, which means a human's looking at it to see if it's excuse my French, Crazy, you know, like going off of a cliff, people did comment and say I don't think that's right. And they were told to follow the algorithm, you must do what the algorithm says. And people died.
Pamela Isom: 46:42
That's why I think there needs to be more education around what we've been talking about here. One would think that that's not needed because one would think that we would know better than to trust a machine to that extent. But the more you hear this talk, I don't know if it's infatuation. I don't know.
Julie Schroeder: 47:00
Now it's sold. It is absolutely how it's sold, yeah, yeah. And there have been cases that I've read of major tech companies and I'm not going to pick on the one in specific, but it was a national news article that they were in front of a very big defense department and cherry picked all the data and all the answers in advance so it would look correct, and then told them it was.
Pamela Isom: 47:25
Well, I think, if we continue to bring more insights into, here are some of the things that are happening, because I do believe AI is good and we've already talked about some good use cases and there are some it's good for more than research. We were using AI before generative AI came on the scene. We're using it, so there are good use cases for it. So we'll emphasize that a little bit more. I think it is necessary for the digital transformation journey. It's not going to go away. We know that.
Pamela Isom: 47:53
But I do think that we would want to be selective as to how we use it and understand what use cases make the most sense. And then, once we agree that we're going to use it, what's our continuous monitoring process and what is our business continuity planning? What does that look like? What is our continuity planning in case something goes wrong that we've talked about here? And then, for critical infrastructure components, we know that we're going to have AI in there to help us out. So understand where and how we want to use it and does that make sense? And I would do the continuous testing and monitoring. So, are there parting words of wisdom that you would want to share with the listeners, or do you think you've shared enough?
Julie Schroeder: 48:38
Your parting words of wisdom implies that I have provided wisdom. All I can say is I want people, when they hear this, to think about the fact that I'm a lawyer who had absolutely no technical training whatsoever. In fact, when texting came about and IMN came about at my first company, someone said you T question mark and I wondered why they were asking about the University of Texas contract. Literally, that's how little I knew the legal stuff, the regulatory compliance, the open source licenses, the third party vendors, some of that. The IP came easier, but I did ended up doing the same thing, which is the data. What is the data that we want? What is the data we want to keep? What is the data we own? What's the data we should not own? What's the data we should not be using? What data do we give back? What rights do other people have? How do we keep it safe? And it brought me to the same place as you, and you have pretty much the exact opposite background.
Pamela Isom: 49:44
Okay, so your communications to the listeners is to pay attention to this podcast Number one. We keep on listening to and trying to understand the value proposition behind AI, make sure that you understand the use cases and make sure you understand the data and the travel of the data. I think that you said it perfectly, okay, well, I am so glad that you were able to join me and participate in this discussion, that you were able to join me and participate in this discussion. So this is one of the first few podcasts for the show AI or Not, and so you are one of the inaugural guests and it's just been a real honor and a pretty deep conversation actually. So I think it should be very valuable for those that are listening, and you've shared some really good considerations that we should take into account as we are moving forward in this era. So I want to thank you so much.
Julie Schroeder: 50:43
Thank you, and the honor is all mine.
Pamela Isom: 50:46
It was just great talking with you. You too, all right.