AI or Not

E012 - AI or Not - Joyce Hunter and Pamela Isom

Pamela Isom Season 1 Episode 12

Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.

Can AI be a force for good in healthcare, education, and even climate change? Find out as we sit down with Joyce Hunter, Executive Director, ICIT (Institute for Critical Infrastructure Technology), a pioneering IT expert whose career trajectory took unexpected turns, leading her to become a thought leader in AI ethics. Joyce’s unique journey from sociology and marketing to data analysis at Hallmark and specialized training at Xerox showcases the unpredictable paths that can lead to groundbreaking work in technology. Her early fascination with AI, sparked by classic sci-fi films like "2001: A Space Odyssey" and TV shows like "Star Trek," serves as a compelling backdrop to our conversation on the real-world applications and ethical considerations of AI.

Explore the transformative power of AI across different sectors with us. In healthcare, Joyce explains how AI can revolutionize preventive care and speed up drug discovery by leveraging medical images and genetic data. We also discuss how AI can aid in disaster preparedness through environmental monitoring and satellite data, and how adaptive learning can create personalized educational experiences for diverse student needs. A real-life orthodontics example highlights the critical role of human accountability despite the advances in AI technology.

As we navigate the complexities of AI transparency, Joyce emphasizes the importance of using diverse and representative data to minimize bias and ensure fairness. Our discussion covers the necessity for regular data audits, transparent models, and the global standards for AI governance exemplified by the EU AI Act. Joyce offers valuable insights on overcoming imposter syndrome and seizing opportunities, leaving you with not only a deeper understanding of AI ethics but also inspiration to embrace new possibilities in your own career. "Live long and prosper" as you embark on this enlightening journey with us.

Pamela Isom:

This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and are not legal, health, tax, or other professional advice, nor official statements by their organizations. Guest views may not be those of the host.

Pamela Isom:

Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and digital transformation journey. I am Pamela Isom, your podcast host. We have a delightful and special guest with us today, and that's Joyce Hunter. Joyce is Executive Director in Collaboration and Education. She's an advisory board chair, a project director, a CEO, an accomplished business leader and colleague. I'm so happy to have you here with us today, Joyce. Welcome to AI or Not.

Joyce Hunter:

Thank you very much, Pamela. It's a great opportunity to be here. Thank you so much for inviting me to be on your program.

Pamela Isom:

You're welcome. It's great to have you. So tell me more about yourself, Joyce, your career journey, how you got to where you are today, and then tell me what caused you to dive into entrepreneurship.

Joyce Hunter:

I'm an accidental tourist through IT; it was not intended for me to be here. I have an undergraduate degree in sociology and a master's degree in marketing. My first job, my internship, was at Hallmark Cards, putting together and designing cards: best of the past, worst of the past, doing the data. We didn't call it data analytics back then, but what we did was analyze the cards to see how many cards were sold, how many people bought those cards, what kind of demographic bought those cards, and then we would design the new season, like Halloween or Christmas, based on the data that we accumulated. So I thought for sure I was just going to be in the back room accumulating data, until I was relocated to Colorado Springs and there were no jobs available except in IT. So I took a class, and they sent me to training, the old Xerox training program for information technology professionals, and that's where I have been ever since.

Pamela Isom:

Wow, I didn't know you had some history in Colorado Springs. That's my home turf.

Joyce Hunter:

I didn't know that. Yes, yes, yes, yes, I worked for Digital Equipment Corporation. While I was out there, I first worked in Colorado Springs and then I went up to Denver.

Pamela Isom:

Wow, me too, me too. Oh my gosh, we're going to have to talk outside of this podcast and compare notes. Yes, that's where I learned artificial intelligence, in OPS5, on the VAX and the Alpha. Wow.

Joyce Hunter:

Oh my goodness, oh my goodness, oh my goodness. Wow, not even six degrees of separation, not even. We were down the street from one another, and I can tell you, I lived on San Pedro Court and our deck faced NORAD's Cheyenne Mountain.

Pamela Isom:

Wow, I know exactly what you're talking about, so that's a good thing. Well, you have a very interesting background, and I knew we connected for some reason; there are many reasons why we have connected. I didn't know about the Hallmark experience, but it's always those rare situations, the ones that seem like such a challenge at the time, that end up paving the way and setting some fundamentals in us to succeed in our careers. So it sounds like your Hallmark experience was the pivot for you, which is exciting. So tell me, what caused you to get interested in artificial intelligence? I know I mentioned that I learned it back in my Digital days, on the VAX and the Alpha, by the way, so both of them. My interest in AI wasn't sparked until then, when they asked us, did we want to take this class? And so I took a class and took it from there. But tell me what sparked your interest in AI, and what's that journey been like?

Joyce Hunter:

Oh, I guess I started way back with 2001: A Space Odyssey. That was the first movie that got me into sci-fi. My dad actually took me to the movies because he was a sci-fi nut, my mother not so much, but he took me to see 2001: A Space Odyssey and I have been in love with science fiction ever since. And then there was the Star Trek episode with the M5, an artificially intelligent computer that became self-aware because of the programming done by its original programmer. And when it became self-aware, a sentient being, it decided that humans were fallible and needed to be eliminated so that there could be peace on Earth. So I was absolutely fascinated by the programming. Why would somebody program a machine to have a mind? So I would say sci-fi was my entry into AI.

Pamela Isom:

I see, I see. So do you think it's sci-fi today?

Joyce Hunter:

No, and that's the funny thing. When I look at those old Star Trek episodes and I look at the communicator, that's basically the phone, and a lot of other things. If you look back at some of those old movies, you see a lot of similarities to the technology that we are using today.

Pamela Isom:

Exactly. And that makes me think about, and I don't want to go into it too much, this fear that communities have around AI, the fear that it's going to be like the Terminator and it's going to wipe us out and all this and that. And I just don't buy it, not one bit. I just think that we're smarter than that. We are smarter than that, to sit back and allow something like that to happen. And I get it. I know where folks are trying to come from with that, and I know that with this race-to-the-top competition that's going on, it's possible. But humans, we are just smarter than that. We are not going to allow that to happen.

Joyce Hunter:

Yes.

Pamela Isom:

We are going to intercept, if at all possible. If we get fired from our jobs, so be it, we'll get another one. But we're going to raise the flag, we're going to sound the alarm, absolutely.

Joyce Hunter:

And the other thing, and I know this was one of the issues that we were going to talk about, but why don't we go ahead and let the cat out of the bag: there are limitations to the current AI. It's data dependent. The old phrase garbage in, garbage out still holds true for AI. Modern AI systems are heavily reliant on large data sets for training. They don't have an innate understanding or common sense, and they can fail when faced with situations not covered by what they were trained to do. So if we think it's going to be like the M5 and go off the rails and cause chaos, that's not what these current systems do. They lack adaptability and flexibility; whatever you put in is what's going to come out.

Pamela Isom:

Yeah, and that's why it's so important to pay attention to your data and the ecosystem that you're using, the one the models are going to choose from. So I've been in lots of conversations with clients around large language models, micro models, small language models: how do we go about taking steps to ensure that our models are going to perform as we expect? And the first thing that I discovered is they don't have any expectations, no guardrails within the organization, no governance. And if there is governance, it's not the fundamentals that I'm always trying to bring us back to; those fundamentals are missing. Do we have policies? If we're going to be bringing these tools into the organization, do we have guidelines for the team so that they feel comfortable using these tools?

Joyce Hunter:

Yes.

Pamela Isom:

Or are they going to now have to fear that if I use this, what are the repercussions? Because oh, by the way, I'm not exactly sure what's going to happen. But what's going to happen if something not so pleasant happens? So we have to have these complicated conversations that clients aren't having to the extent that's needed. What's your thinking on that?

Joyce Hunter:

Well, absolutely. I mean the ethical and moral reasoning: AI does not have the ability to make ethical and moral decisions. That's why you have to have a multi-generational, multi-factional team to put these algorithms together. You need the human resources person, you need the marketing person, you need the computer scientist. You cannot have somebody just put code into a system to do something you want. Humans possess intuition, empathy, hopefully, and moral reasoning that's shaped by culture and experience. AI can't replicate that, right? It just can't. So you need those multidisciplinary people. I have heard that in AI and in cybersecurity, humanities majors make the best analysts, and that's because of the reasoning, right? They think about the moral implications and the ethical implications. You get a philosophy major who thinks about all of these things. You get a sociology major, like me, and you talk about the study of people and what their reaction is going to be. So you have to have all of these different disciplines in the adoption and the creation of these AI models.

Pamela Isom:

I agree with that, I totally agree with that. And that goes to ethics, that goes to governance, that goes to data management as well as governance. I mean it covers it all.

Joyce Hunter:

And the data privacy laws, yeah. So if we're looking at regulatory frameworks and you're talking about ethical AI guidelines, you have to have those, and that goes to your governance. And you have to implement robust data privacy laws that protect citizens' data while enabling AI innovation. That's a fine line. That's a really fine line, but you have to have really good, dedicated people who are not willing to allow that line to be crossed.

Pamela Isom:

I agree with you. So, considering what you just said, what do you see as some benefits that AI can bring to society?

Joyce Hunter:

Oh boy, how much time do we have? Let's take health care as an example, since health care is in the news now. AI could perhaps personalize medicine. I have been such an advocate for personalized medicine, because I am sick and tired of being used as a guinea pig.

Pamela Isom:

I hear that.

Joyce Hunter:

They will try something on me because it worked on somebody else, and it makes me sick. I'm done. So AI can analyze genetic, environmental and lifestyle data all at the same time to create a personalized treatment plan. I think that's fabulous; that's one of the benefits. Then there's predictive analytics, early diagnosis of cancer. If you could really detect the disease early enough, utilizing the medical images and genetic information and patient records, it could lead to better preventive care. You don't want sick people; you want prevention, keeping people from getting sick. Right now we're treating the sickness, we're not preventing the sickness from happening. And if we can use AI algorithms to chart the path of where this person has been, you could probably prevent them from becoming a type 2 diabetic. And there's drug discovery too. I don't know exactly how it works, but I am sure that, taking all of the different compounds, you could significantly reduce the time and cost of developing these drugs, and maybe the drugs wouldn't cost so much.

Pamela Isom:

Wouldn't that be something? Oh my gosh, because the drugs are so expensive, which then leads to inequities, because it's too expensive. That's a beautiful reason to use AI, if it can help with that, which, by the way, I agree with. So, if I think about what you are talking about here, you're pointing out how AI would be beneficial in the healthcare sector. I know you see it cross-cutting, but you're saying you see significant benefits in the healthcare sector. Do you feel like it has been tapped like it could be, or do you think there are opportunities to do more?

Joyce Hunter:

Oh, there are still so many opportunities to do more. That's just one area. Of course, there's climate change and environmental monitoring. We've got all these satellites up there; why don't we take the satellite data and analyze it so that we can predict environmental changes? I don't know if this is possible, but this is me thinking: we could maybe predict natural disasters, instead of waiting for the tornado alarm to sound. We could see it coming from the satellites, and then we get better prepared for disaster response and mitigation. AI and climate change: with the satellites they could have seen Katrina coming, and could have been better prepared to help the people get out faster, sooner, better. Then there are sustainable practices and renewable energy sources. Being a former energy person yourself, you know about rural development and the potential for renewable energy sources.

Pamela Isom:

Yes.

Joyce Hunter:

And then, of course, there's education. Education, just like health care: you could use AI to do adaptive learning, which means you can have customized learning for the student. A lot of these classes have 28 students in a room, and everybody's learning differently, and some are failing and some are thriving. Well, you can eliminate that with adaptive learning. And then, even though they have their own individual lesson plans, it'll be up to the instructor to bring it all back together, just like we do in our breakout sessions at our meetings. Everybody goes off to their little groups, we discuss, and then everybody comes back together and talks about what we learned.
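The adaptive learning loop Joyce describes, each student advancing at their own pace while the instructor brings the group back together, can be sketched as a toy difficulty ladder. Everything here (the function names, the 1-to-10 scale, the one-step adjustment) is an illustrative assumption, not any real courseware product.

```python
def next_difficulty(level, was_correct, step=1, lo=1, hi=10):
    """Move a student's difficulty up after a correct answer, down after a miss,
    clamped to the allowed range."""
    level = level + step if was_correct else level - step
    return max(lo, min(hi, level))

def run_session(start_level, answers):
    """Replay a sequence of right/wrong answers and return the difficulty path."""
    path = [start_level]
    for correct in answers:
        path.append(next_difficulty(path[-1], correct))
    return path

# One student struggles early and recovers; another advances steadily.
print(run_session(5, [False, False, True, True, True]))  # [5, 4, 3, 4, 5, 6]
print(run_session(5, [True, True, True, True, True]))    # [5, 6, 7, 8, 9, 10]
```

The point of the sketch is simply that two students starting from the same lesson plan end up on different paths, which is exactly what one lesson plan for 28 students cannot do.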

Pamela Isom:

I like that a lot. I like everything that you said; I agree with you. If you think about adaptive learning, I think that's an area that's not been tapped like it could be, because if we think about what's going on today, we have all these different types of tools that we could use, and we can even align those tools to how we learn.

Pamela Isom:

Maybe some people don't prefer to look at devices to learn. Maybe it's annoying to them. I know people who have issues with the movement of things; it bothers them and it distracts them from learning. And so there's not enough attention, I don't think, to helping us understand, and I think that's a good use case for AI: helping us understand what the learning opportunities are for the diversity of humanity. So I think that's a right use case. You call it adaptive learning and you describe it in a different way, but it still gets to what I'm saying here: we want to start to better understand how we learn, and the AI and the algorithms, because of their predictive nature, can help to predict patterns that would be more beneficial based on a style. And you still have to be careful with that, because you don't want to then start to stereotype or anything like that, but that's where the governance comes in.

Pamela Isom:

So, yeah, I was thinking about my dentist. Here's a real-life example, because you know I like to use myself as an example many times. My orthodontist, I'm undergoing treatment now, and they did an image of my mouth before the treatment plan. I had started to see that over the years my teeth had shifted a little bit to the left, which caused the gap in my front teeth to widen some. So I said, I don't like this. I probably shouldn't care, but I do, so I want to know what you can do. So the dentist showed me: they use AI, they use digital twinning and simulations. They did an image of me in the current state, and then an image of me after running the algorithms, and then they came up with a treatment plan. I'm about, I don't know, three quarters of the way through the plan, and I can see that there was a tooth that went crooked and then caused everything else to shift. You know how our teeth are; you have no control over them. And so they started the treatments, and I can see the difference.

Pamela Isom:

But let me tell you what I said. I recognize the models. I recognize that algorithms are being used. They took pictures, so I recognize computer vision. I recognize it all. But you know what? That doctor is accountable. I don't care how much that technology says this is how it's going to look after 15 weeks or whatever; that doctor is accountable, and it better work. I said all that because here's where I think organizations need to be: be responsible, be accountable, and don't blame AI.

Pamela Isom:

Don't even start blaming AI. If you don't have your governance processes in place, that's what you blame. That's where you put the blame, and you do something about it. If your chatbot is offering a service that you don't even provide, that's a contract that you just arranged via a chatbot, but it was you behind the scenes. So I think we should be very careful with how we use the tools, although I'm very excited about them. What made me think of that is your conversation around healthcare, your conversation around being adaptive.

We had a little bit of talk about governance and policies and ensuring some fundamentals are in place, and so that made me think of that, and that's why I shared that example.

Joyce Hunter:

I truly believe that before you even think about putting in an AI solution, you've got to develop the guardrails or the framework around it, the governance around it. That way you know what you can do and what you can't do, or what you should not do. And I wholeheartedly agree that people will, if they think they can get away with it, say, well, it was the AI model. Well, you have a mind; you can look at the AI model. And I'm a big believer in instinct. If your instinct tells you that this just doesn't feel right, question the model. Don't just accept it because the computer says it, or when you don't know what kind of data was put into the computer. Like I said before, garbage in, garbage out.

Pamela Isom:

Isn't that like going back to when we first started our careers? I mean, I'm a J2EE developer from the Agile days, and we talked about this then. From J2EE and Java, it was garbage in, garbage out. And so it's these same fundamentals that are carrying forward that, if we could just remember them, would help us be so effective in this digital transformation world.

Joyce Hunter:

Absolutely right. You're absolutely right. And as far as efficiency goes, it's okay to want to do things fast, but you have to be efficient; doing things fast does not mean it's right. Take the time to make sure that the data used to train these AI models is diverse and representative of all segments of the population. This helps minimize bias and makes the AI fairer across different groups. You've got to do those regular data audits to identify and mitigate biases. You've got to have transparent and explainable AI. This is my 10th year of the summer camp that I run for underserved and underrepresented kids. I teach them data science: the data science behind athletics, energy and agriculture, and we're going to add a fourth one, which is architecture, and we're going to glom that onto the agriculture piece. I tell them, you do all the research you want, but you'd better be able to explain the data.
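The "regular data audits" Joyce calls for can be illustrated with a minimal sketch: compare each group's share of the training set against a reference population share and flag large gaps. The group labels and the 5-percentage-point tolerance are assumptions for illustration, not a standard.

```python
from collections import Counter

def representation_audit(train_groups, population_share, tolerance=0.05):
    """Return the groups whose share of the training data deviates from their
    population share by more than `tolerance`, with the signed gap."""
    counts = Counter(train_groups)
    total = len(train_groups)
    flags = {}
    for group, pop_share in population_share.items():
        train_share = counts.get(group, 0) / total
        gap = round(train_share - pop_share, 3)  # signed: + over, - under
        if abs(gap) > tolerance:
            flags[group] = gap
    return flags

# Group C makes up 20% of the population but only 5% of the training set.
population = {"A": 0.5, "B": 0.3, "C": 0.2}
training = ["A"] * 70 + ["B"] * 25 + ["C"] * 5

print(representation_audit(training, population))  # {'A': 0.2, 'C': -0.15}
```

A real audit would of course look at many attributes and at label quality, not just raw group counts, but even this simple check makes an underrepresented group visible before the model is trained.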

Pamela Isom:

I hear that.

Joyce Hunter:

Don't stand up there and just say, well, these are the numbers. I ask every single team that gets up in front of me and presents their information: okay, now tell me why I should care. Why should I care about what you said? What effect is this going to have on my life? How is this going to make my life better, my family's life better? It's the same thing with AI. If you can't explain it, go back to the drawing board.

Pamela Isom:

That's true. And workforce development came out of what you just said. Workforce development and preparing for the skills that are needed, because those skills that you just described are needed now more than ever: the ability to understand and explain the outcomes and the rationale, not just from an AI perspective but from a survivability perspective. You've got to know how to do these things, because a lot of times organizations are going to lean on the tools to generate the basics, but those analytical skills still matter. A tool is not going to explain to me why something is the way it is; it's going to give me the data points and the outcome. It's not typically going to explain the why, and that's why it's important. I know we've said that AI needs to be more explainable and that there's a big focus on that, which I agree with.

Joyce Hunter:

Yes.

Pamela Isom:

We need it to be more explainable, we need it to be more transparent, but we aren't there yet. And even if we had all of that, even if we had really good model cards and good data cards, you still have to have someone explaining what that means. And then for our youth, you made me think of our youth because of what you said, those skills are not something that a computer can do for us.

Joyce Hunter:

No, no. And you have to be able to articulate yourself. You have to provide those clear explanations of how the AI works. Just creating it and saying, oh, look at this thing that I created, isn't it wonderful? Well, like I said, why should I care? What's this going to do for me? What is the total cost of ownership? I was just given a demo this afternoon of a tool, and I like it. I mean, I'm hard to convince, but this tool is absolutely wonderful.

Joyce Hunter:

What it will do is take the mystery out of USA Jobs. What would take a human resources person 60 to 90 days, going through 100 applications for a job, it'll do in 30 to 60 minutes.

Pamela Isom:

I am trying my best to not interject, but I'm so agreeing with you.

Pamela Isom:

I just know that we're recording, and so I don't want to interject with some of the things that I normally do; that's why I'm just pausing here. But I swear, that is just spot on, that is just so true, and there are so many use cases. I love workshops, put it this way, I love the workshops that I have with my clients, where we are literally going through simple use cases, and then I hear them go, oh yeah, I didn't really think of that. And we need more of that, because we think these tools are for humongous and complex tasks, which they are, but there are so many low-risk opportunities to use these tools that we don't really think about. I love the example that you shared, because that made me think of one of those examples, particularly around processing tons of information that we normally just get frustrated with. We're either like, uh-uh, it's too complicated, so I'm not going to even apply, because I don't know what they're looking for.

Joyce Hunter:

They put out a job posting, and I can almost guarantee that if they get a hundred applications, how much do you want to bet the last 25 don't really get attention from a person? So if you could put all 100 of those into the funnel, have it sift them, and then just give you the top 10, then you can pay attention to those top 10 to make sure that you are getting the person that you want. And the latest number for data analysts and data scientists and people in this field is about 100,000 in the government alone. I mean, that's just the government that's looking for 100,000 people.
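The sifting funnel Joyce describes, 100 applications in and the top 10 out for human review, might look something like this in outline. The scoring rule and field names are hypothetical; a real screening tool would be far more sophisticated, and would itself need the bias audits discussed earlier.

```python
def skill_score(application, required_skills):
    """Fraction of the required skills an application mentions."""
    matched = sum(1 for skill in required_skills if skill in application["skills"])
    return matched / len(required_skills)

def top_candidates(applications, required_skills, k=10):
    """Rank every application by skill coverage and return the k best
    for a human reviewer to look at closely."""
    ranked = sorted(applications,
                    key=lambda app: skill_score(app, required_skills),
                    reverse=True)
    return ranked[:k]

required = ["python", "sql", "statistics"]
# 100 made-up applications with varying skill coverage.
applications = [{"id": i, "skills": required[: i % 4]} for i in range(100)]

shortlist = top_candidates(applications, required, k=10)
print(len(shortlist))  # only the shortlist of 10 reaches the reviewer
```

The human stays in the loop: the funnel narrows 100 applications to 10, and the hiring decision among those 10 remains a person's call.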

Pamela Isom:

Yeah, it's a big number.

Joyce Hunter:

By the time you apply to USA Jobs, by the time they get through it, it's six months and they have already gone and gotten a job someplace else and you've messed up your opportunity to get this really bright person.

Pamela Isom:

It's a serious thing. I'm laughing, but it's a serious thing. People need jobs, people need work. It's a very serious thing.

Joyce Hunter:

So I'm not mentioning the product or the company, but I think it's wonderful, and I hope, keeping my fingers crossed, that they can get to the right people so that they can really share what this particular tool does. It is amazing.

Pamela Isom:

Yeah, well, that's good. That's good to hear. So, considering all of that, is there a myth that you think is associated with AI that you would like to debunk, or that you would like us to discuss?

Joyce Hunter:

Oh my gosh. Well, that AI is going to take over the world. Remember those old movies about world dominance, where the bad guy would come? All those Marvel movies, there's always one big bad guy who insists that they're going to take over the world. And that's what people think about AI. They think artificial intelligence is going to be this big thing that takes over the world and shuts down all the electricity, and we're not going to have heat, water or food. And that's such a myth. Like I said, AI can't think on its own. It has to be programmed by people.

Joyce Hunter:

Now, you can have some crazy people out here that could potentially program the machines to do some nefarious things. Look at what we're seeing with ransomware: those are AI tools, those are bots hidden within the machines themselves before they get shipped. Or take MGM, for example. I don't know why they didn't do network segmentation, but that's an argument for another day. They got in through customer support. Somebody called customer support and pretended to be somebody else, an employee, did some deepfake AI with their voice, got through and had the password changed, and that's how they got in and shut everything down.

Pamela Isom:

Which makes me think about how we should look at this more closely. You know that I mix AI and cybersecurity; I blend the two together for the reasons that you just described. But I think about the situations where people have been sending funds to nefarious folks, sending funds to avatars, because they thought it was a real human being. So it makes me think that we want to learn how to determine whether the websites you're communicating with, the avatars you're communicating with, are authentic or not, because there's nothing wrong with having a deepfake in itself.

Pamela Isom:

Yes, it's how it's used. So I always use the example: if they want to do a deepfake of me talking to you and translating it into another language for another country, I would welcome that idea. But one, did they ask for my permission and get my authorization? And number two, what is it saying? Is it saying what I'm saying? Is it interpreting the information properly? So there are good examples of deepfakes, but what we have to understand is how to know when one is being used for adversarial behaviors. And typically that's what it's for, but not always. Yeah, that's true.

Joyce Hunter:

That is very, very true. Yes.

Pamela Isom:

Yeah, yeah.

Joyce Hunter:

We're going back to data protection again. So you have to have the laws in place, like GDPR. You've got to implement data protection regulations on AI, similar to GDPR, to ensure privacy protections for individuals. You've got to have it.

Pamela Isom:

And we ought not to think that GDPR is for some other country. GDPR is for us, GDPR is for everyone. I think a lot of times we assume GDPR is for another country, but it's international legislation that we need to be paying attention to.

Joyce Hunter:

Yeah, Exactly, exactly, and we also have to make sure that the data that's collected by the AI systems is stored and processed in compliance with the national and international data sovereignty laws.

Pamela Isom:

Agree.

Joyce Hunter:

The EU came out with their AI Act just recently, and they just had a meeting last week talking about cybersecurity, the international cybersecurity laws, with the Department of State and CISA teaming up to do something overseas. I can't remember what it is, but I saw it on the newswire this week. Those are the kinds of things I mean: we've got to be together on this, because there are people who are literally after us, and if we don't pull together as an international community, we can't just all go off and develop our own. There are over 200 pieces of legislation in the works, and that's local, state and federal. Everybody's trying to develop their own. Like you said, they're rushing to judgment. They're running around like chickens with their heads cut off, saying the sky is falling, the sky is falling, we've got to do something about this AI.

Joyce Hunter:

Now everybody's developing their own policies. What effect is that going to have if a local government comes out with its policy or governance, and then the federal government decides to come out with its own?

Pamela Isom:

Yeah, nobody's talking. The silo mentality is still going on. A global strategy, an international strategy, is exactly what we need. We need something like GDPR for AI.

Joyce Hunter:

We do.

Pamela Isom:

And no country should be able to say this isn't reasonably safe. I do love the one from the US. It does a good job of talking about the safeguards, respecting innovation, and addressing privacy and all that. So I really like it.

Pamela Isom:

But the EU AI Act, as I've always said, I like because it does a good job of helping one understand the risk types and the risk categories. If you're going to be using AI within your organization, those are some of the fundamentals you need: you have to start laying out where it is acceptable for your organization. In healthcare, where is it acceptable? Some would say it shouldn't be used in healthcare at all, but that's not the case, because we gave some really good examples. My dentist situation is a good one, and he is still accountable. Some would say don't use it for oral hygiene or oral surgery. I didn't have surgery, but anyway, some would say don't do that. But it's okay if we can start coming up with international laws beyond HIPAA. Those are the things that we need.

Joyce Hunter:

Global standards and cooperation period.

Pamela Isom:

Global standards and cooperation.

Joyce Hunter:

We've got to work with international bodies to develop these global standards. Pamela, standards and governance: people run when you mention those two words. But you have to have these global standards and frameworks. I am now referring to governance as frameworks, because people run, they hide, they put their fingers in their ears, they don't want to hear about governance. So I'm now calling it just frameworks. We have to develop these global standards and frameworks for AI development and deployment. Not just developing it and throwing it over the fence; you have to have smart deployment.

Pamela Isom:

I agree. Well, this has been an interesting discussion. I'm glad we were able to talk. I have one more closing question for you. Well, first before I ask that, is there anything else that you wanted to talk about before we get to the last parting words?

Joyce Hunter:

Let's talk about what everybody's talking about: displacement of routine jobs. AI automation, of course, is expected to continue displacing repetitive jobs. Note, I said repetitive jobs. I would think, I would hope, that people would want to do something different after doing the same thing for the last 20 years. Jobs will be lost, and new jobs will be created. The American public has to be flexible and adaptable. You have to have that adaptive mentality, a willingness to learn, to change, to do something different. Back in the old days, my dad used to work with drafting tools; as soon as the CAD systems came out, he wanted to retire. It was that way of thinking, and we're at that crossroads now, where there are manual ways of doing things and we can now change those ways of doing things. I would be interested in figuring out how to work on the programming side so that I can make other jobs easier. The other question I ask is: what would you do to improve how you work and the way you work?

Pamela Isom:

I agree, that's a good one. Now is the time to really think about how to cultivate your experiences, considering that you have an AI companion. What does that mean for my decision making? How do I get in command and stay in command? That's a skill, and it's an elevation to one's career that I don't think we consider as well as we could. Rather than being intimidated, see it as an opportunity. Imagine having a position where you are managing the AI, the avatars, or managing a team where one of your teammates is a digital assistant. If you can add that to your resume, that's a lot more powerful than saying, well, I do it the traditional way and I don't fool with AI and digital assistants. Yeah.

Joyce Hunter:

It's all about digital transformation. It is. Transforming your mind, thinking differently. I always use a term when people start dragging their feet: I call it FUD, fear, uncertainty and doubt. If you don't explain it, people are going to get scared, because they don't know what they don't know. If all they do is listen to the science fiction people, and nobody comes down and explains what the path is, how they can get from here to there, and what educational programs we're going to provide to help them get there, they're going to be stuck in their own heads, thinking about other things.

Pamela Isom:

I agree, I agree with that a lot. Okay, so now, parting words. Do you have parting words of wisdom, or experiences that you would like to leave with the listeners?

Joyce Hunter:

When the door opens, walk through it. Don't get imposter syndrome and think, oh, I'm too old, I'm too young. No. If I had that mentality, I would still be at Hallmark Cards.

Pamela Isom:

That's good. That's a really good piece of advice. It's been wonderful having you on the show and talking to you today. I think we could have gone on and on, but we can't, so thank you so much for being here. We'll look at how we can get you back. Thank you for supporting this show; you did a wonderful job, and your insights are invaluable.

Joyce Hunter:

Thank you very much for the invitation. I am humbled that you asked me to be on the show. I really enjoyed it, and I look forward to working with you in the future at Mission Critical. We're going to do it. And, as I always end my talks: live long and prosper.

Pamela Isom:

Thank you, you too.