AI or Not

E010 - AI or Not - Lindsey Wagner and Pamela Isom

Season 1 Episode 10


Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.

Can AI truly revolutionize the workplace, enhancing both compliance and operational efficiency? Lindsey Wagner joins us to unravel this intriguing question. Lindsey Wagner, Esq., AWI-CH, CIPP/US, is the Managing Partner of Wagner Legal, PC, an employment law firm; she focuses her practice on AI compliance in the workplace, data privacy, and workplace investigations. With her rich background in employment litigation, Lindsey takes us on a journey, shedding light on how AI tools are reshaping workplace practices, from adhering to discrimination laws to boosting productivity. We delve into real-world examples, illustrating AI's dual potential to drive positive outcomes and present complex challenges.

Our discussion takes a deeper turn as we explore the critical balance between leveraging AI and maintaining data privacy and confidentiality. We examine professionals' ethical responsibilities, especially in regulated fields like law. As a Certified Information Privacy Professional (CIPP/US), Lindsey shares insights on how smaller, specialized language models can offer enhanced control and compliance, helping businesses navigate the murky waters of AI integration without compromising sensitive information. The importance of vigilant data handling and the ethical use of AI in everyday business tools is underscored throughout our conversation.

Shifting gears, we emphasize the necessity for clear AI policies and comprehensive employee training programs. Lindsey draws parallels to the mandatory training brought about by the Me Too movement, highlighting that policies alone are not enough without proper education. From innovative sandbox environments for safe AI experimentation to the continuous need for upskilling, we discuss how businesses can stay ahead in the evolving AI landscape. This episode is packed with practical advice and thought-provoking insights, ensuring you're well-equipped to harness the power of AI responsibly in your organization.

 

Pamela Isom: 0:00

This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and are not legal, health, tax, or other professional advice, nor official statements by their organizations. Guest views may not be those of the host. Hello and welcome to AI or Not.


Pamela Isom: 0:33

A podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and digital transformation journey. I'm Pamela Isom and I'm your podcast host. We have a special guest with us today: Lindsey Wagner. Lindsey is an employment lawyer and AI integration specialist. We met in 2023 and participated in a panel together in support of workplace investigations, which was very interesting, and we have since furthered our collaboration through some capacity building and supporting clients with coaching and training, particularly around HR matters. Lindsey, welcome to AI or Not.


Lindsey Wagner: 1:22

Thank you, Pam, so excited to be here on AI or Not.


Pamela Isom: 1:28

It's good to have you. You're a good colleague and a good friend. I'm delighted to have you on the show. I'm going to start out by asking you to tell me more about yourself. You have a very interesting background, so tell me more about your career. How'd you get to where you are today and what's it like being an employment lawyer in the AI era?


Lindsey Wagner: 1:48

Yeah, good question, and you're always keeping me on my toes too. So my background: I am an attorney licensed in Florida, Ohio, California, and New York. For the past 14 years my experience has been in employment litigation matters, representing employees and employers in state and federal court and arbitration, through litigation matters having to do with anything in the workplace: discrimination, harassment, retaliation, whistleblower, wage-and-hour issues, and all of that. So that's been my background. And then over the past three or so years, I've really focused my practice on doing more advice and counsel away from litigation matters, working on workplace investigations, workplace advice and counsel for individuals in the entertainment industry, and then also fine-tuning my experience with AI integration, and so that's been something that I've been developing over the past three years.


Lindsey Wagner: 2:48

I started off on this journey not nearly as long ago as you, Pam, but it feels like it's been a long time, at least from a legal perspective.


Lindsey Wagner: 2:57

Law catches up slower than a lot of other industries, and so in that capacity there weren't a ton of lawyers out there working on AI integration, and especially thinking ahead about how it's going to impact the workplace. I started researching this issue and, trying to get ahead as a forward-thinking lawyer, published an article in the ABA newsletter, the American Bar Association Labor and Employment newsletter, on AI and what we could look to in the future.


Lindsey Wagner: 3:26

That was, I think, in 2022. A couple of months later, ChatGPT made its appearance to the public, and then everything changed. From then on, I've had the opportunity to speak and write on AI integration, looking at the trends in legislation that we're seeing and the proliferation of AI use in the workplace, and how that interplays with issues of confidentiality for trade secrets, for proprietary information, for employee information and privacy, and just tools that are available for lawyers and business owners and executives and HR alike, and understanding how those tools can be integrated in a way that's not only ethical but also in line with current existing laws, like discrimination laws, identifying the potential for discriminatory impact of algorithms in the workplace when they're making workplace decisions. All that fun stuff. So that's my experience and current job in a nutshell.


Pamela Isom: 4:35

So has AI caused some of the things that you're doing to change, and if so, give me an example.


Lindsey Wagner: 4:45

As far as what I've been doing, because my focus has been so much considering AI integration.


Lindsey Wagner: 4:51

Now I've been looking more at workplace advice and counsel from a technology perspective, from a data perspective. Before, I was more singularly focused on just regulations, from Title VII or from the Age Discrimination in Employment Act, as they relate to your normal workplace practices, hiring, firing, but not using technology, or those decisions being augmented or supported by technology. Now that we have all these AI tools that are available in the workplace, plus tools that can make your workplace more efficient but are developed using Gen AI, it has a whole new meaning: how are we counseling our clients on confidentiality and confidentiality agreements, and training your workforce to understand the implications of using those tools? What tools are you going to allow in the workplace? And so it's really a balancing act between compliance with current and future legislation, and also wanting to have a more efficient business practice and take advantage of the tools that are available for the workplace in a way that's going to be legally sufficient and ethically minded.


Pamela Isom: 6:13

That makes sense. That makes sense. First of all, thank you for sharing your background; like I said, it's pretty interesting and it's actually intriguing. I think that it would be a change in the workplace, and the change, like you said, is because now you're not just dealing with the regulatory requirements, but you're dealing with a tool that helps to either amplify issues or amplify good use cases, because that's what AI does: it magnifies and amplifies. It does both. And so when I have conversations with my clients, we always talk about what the good use cases are.


Pamela Isom: 7:01

I was in a meeting just the other day, you'll find this comical, but we were talking about AI and we were speaking to some of the good use cases first. A good use case, from an employment law perspective or even the entertainment field perspective, which are your two areas, is helping when it comes to sorting through the briefs, for instance, or maybe starting to compile information for a brief, or just listening to some dialogue and capturing some key points for a brief, or key messaging that you want to capture. So we actually used it in our meeting, it was a conference, and we used it to summarize takeaways from the meeting. It was fascinating, because we were all nervous in the end: what's this going to say? And when you're thinking about working with employees, when you're thinking about HR, you have to be considerate of those things. So we were at a conference, and they looked at me and went, "Look at your face," because I didn't know what it was going to pick up. It was summarizing the different talks for the day, and there was no proofreading beforehand. We said, let's try it out. But I could see that in the HR space and in the legal space. So there's the good: it helps you out. It helps with the manual tasks that you have to take care of.


Pamela Isom: 8:37

But then there's those risks. We then talked about some of the risks, and someone said, "I just feel like going into a cave and just hiding." And we laughed about it, because there are two sides to the coin. So with what you brought out, I was paying attention to what you said, and I literally heard: you want to be careful about some of the risks. And it can help you very much, you said, with some of the legislation, reading and processing the legislation, understanding what the legislation is saying, or maybe summarizing some of the regulatory requirements. But also there is the concern that you don't want to do something that could be harmful or unethical. That's what I heard you say, and you can always correct me if I don't play it back properly, but I usually will repeat back what I heard in my own speak.


Pamela Isom: 9:35

So it was funny when we had that meeting the other day, because we were talking about that, so I just wanted to bring that up. I appreciate you sharing, and I love the fact that you're focused on integration. I'm a software engineer by trade, so I used to think of AI integration purely from the technical standpoint of a software integrator like myself. I believe that what you are bringing out, which is very valuable, is that from a legal perspective you have to really think about that integration, and more so with AI, because of the fact that it amplifies and magnifies. Is that correct?


Lindsey Wagner: 10:18

Yeah, absolutely. And the interesting part, Pam, is that when I talk to a lot of people about AI, they're like, "Well, I'm not really using AI, though; this conversation isn't relevant." And I'm like, of course you're using AI. If you have a computer, if you have a phone, you are using AI, and you might not even realize it, because now there are so many opportunities for developers, software companies, apps, whatever, to have AI powering them in some part or way. As we're talking on our Zoom right now, I look down and there's a little button that says AI Companion. It is everywhere.


Lindsey Wagner: 10:52

There was even a big article that came out recently about how Slack has changed its policies to recognize that it now has some Gen AI integration, and there's a possibility that it might be learning from conversations and so forth. Well, if you actually look back at the terms and conditions, they've likely been using AI in some capacity for quite some time. It's just now that they've specified they're using Gen AI, and now that it has that learning integration, there's a consideration, especially if you're a law firm or another regulated industry, with a concern about maintaining confidentiality not only for business competitive advantage but also from a regulatory standpoint. For lawyers, we have an ethical obligation under our bar rules, to keep our licenses active, to make sure that we're maintaining client confidentiality. And so if you were using a communication app like Slack, it would have to be a consideration. First you need to have the wherewithal to identify, or know to look: is this app or this program using Gen AI? And then have an understanding: what are they doing with my data? Is it being used for training? Are they disseminating it to a third party? What kind of information am I actually going to be inputting or using in that program? And if I do so, is that in line with my confidentiality obligations as a lawyer?


Lindsey Wagner: 12:26

From a business perspective, even if your confidentiality obligations aren't per se regulated, businesses inevitably have a certain consideration for confidentiality, maintaining the integrity of their data to keep that business advantage. And if you're going to subject your employees to confidentiality agreements and then find that an employee breached an agreement by disseminating information, that might become part of litigation. Well, what have you done, employer, to make sure that you're maintaining that confidentiality yourself? And what if it comes out during the discovery process: hey, we were actually using all these different software apps that had Gen AI built in, and they're training their models and so forth. Have you really taken steps as an employer to maintain confidentiality? You might lose out on that confidentiality argument between you and the employee if you yourself, as an employer, have not taken affirmative, best-practice business steps to maintain the confidentiality of your own data and your own business practices.


Lindsey Wagner: 13:30

So when we look at integration, it's not just, hey, let's integrate this from a software consideration. Are we maintaining it for legal purposes? Do we have an obligation? But also looking at it from the big picture: if you use this, what are the pros and cons? How is this going to implicate things from a business perspective in the future? It's an ongoing conversation that you're always going to have to be having, reviewing updated terms and conditions to make sure that you're maintaining that confidentiality, if that's crucial to your business. For most businesses it will be.


Pamela Isom: 14:09

But that seems like it makes it even more difficult, because I think about large language models, and that's why earlier I mentioned the micro models. The smaller language models have been on my mind lately, and I've been pursuing them, because I feel that there's not only the fine-tuning of the model; smaller language models are models that are smaller as far as their responsibility, so they're there to do a specific task, where the large language models are based on way more information and are not as fine-tuned. They're broader. ChatGPT, for instance, is built on a large language model, where a smaller language model could be one that sorts through employee applications and anonymizes data. That's all it does; that's its job. And so in those kinds of situations I feel, and I don't know this for sure, I'm just starting to lean this way, that the smaller language models may be a way to drive more confidentiality and help with the guardrails that we're trying to put in place.


Pamela Isom: 15:49

So, again, you still have to be careful. As you pointed out, we would still have to be careful with updating the terms and conditions, notifying clients and employees, and being careful how we're using the models. Maybe we want to contain the models to just work with data within our own organization, data that's proprietary to our own organization. So that would still stay the same, and that's what we're doing today to help from the confidentiality perspective. But I feel like we want to mitigate the risk even more. We don't want so many micro models running around our organization that we can't keep up with them. But I think we should start to build in some strategies around: where are those niche models, and what would they be used for? And so that's what comes to my mind as we talk about that, because confidentiality and some of the other things that we have to deal with are so important.
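[Editor's note: the anonymization task described here, stripping identifying details from employee applications before review, can be illustrated without any model at all. A minimal rule-based sketch in Python; the field names and regex patterns are illustrative assumptions, and a production system would need far more robust PII detection than this:]

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def anonymize(application: dict) -> dict:
    """Drop direct identifiers and redact contact details from free text."""
    # Remove fields that directly identify the applicant.
    redacted = {k: v for k, v in application.items() if k not in {"name", "address"}}
    # Redact emails and phone numbers embedded in free-form text.
    text = redacted.get("cover_letter", "")
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    redacted["cover_letter"] = text
    return redacted

app = {
    "name": "Jane Doe",
    "address": "123 Main St",
    "cover_letter": "Reach me at jane@example.com or 555-867-5309.",
    "years_experience": 7,
}
print(anonymize(app))
```

The point of the sketch is that the anonymization step is a narrow, auditable job, which is exactly the kind of task a small, specialized model (or even deterministic code) can own without exposing the rest of the data.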


Pamela Isom: 16:46

The other thing I've been thinking about is that we always talk about quality data and quality-assuring our data, and I think about simple practices like clarifying what a customer really means. What does a customer really mean? What does an invoice really mean? What does a bill mean? Because those kinds of things will help safeguard how we protect the information. What does an address mean, and are we consistent across the organization on what that means? Is a mailing address the same as a correspondence address, and what correspondence should go to which address? So it's all these things that we have to think about, and now we have to be sure that we've integrated them as a part of that AI integration. What do you think about what I just said?


Lindsey Wagner: 17:42

Super interesting. Two thoughts come to mind. First, with regard to the small language models, I think that's a really great point, and I've actually seen some AI tools coming out specifically around the legal community, because of that concern of housing all your confidential information and your client information, where they do integrate a small language model for learning your own internal system and your own data, and that lives within your own cloud. It's integrated that way, but it's also insulated that way, so it's only your data. And some organizations or AI developers have even said, we don't have access to that information; it's yours, it's your data, and we build this AI system around your data. So that has been a solution that I've seen some organizations develop. Now it's not, of course, going to have the power of an LLM, but at the same time it can address those kinds of confidentiality issues and so forth. Harvey AI is one of the ones that's been built around the legal community, and one of the things that they've advertised is how they learn from your own data. So that's a great idea there.


Lindsey Wagner: 18:59

When it comes to defining terms and having consistency within the workplace, I think it's really important. One of the things that I do is workplace training, and a lot of times it's training employees and also management on how to address a workplace concern. And a workplace concern, complaint, issue, conversation: all of those are different terms. Let's just say an employee has an issue about pay or discrimination in the workplace. They might approach a manager or HR and use a lot of different kinds of language to address that concern. And, from a human perspective, we often run into issues where a manager is not really trained, and they say, well, I didn't perceive that as being a complaint, so I didn't know to escalate it, because the employee didn't use the word "complaint" in their concern. And then it becomes an issue: was the employer put on notice of the concern?


Lindsey Wagner: 19:57

Did they have an opportunity to address it? To the extent that there would ever be an AI model developed around employee concerns, identifying what is a complaint is something that plagues all of us lawyers whenever we're litigating these matters, if they have to do with retaliation and so forth. It's just an example of what we, in the day-in, day-out practice of employment law, have to deal with: well, what did you really mean by a complaint? Was this a concern? Was this just putting us on notice? Was this just a question, and you weren't really complaining? All of those issues, exactly.


Pamela Isom: 20:37

Was it a chat? Were we just having a chat?


Lindsey Wagner: 20:40

Yeah, and we didn't know you wanted to escalate this; we didn't know you were expecting action. And so that's a really great example where an intelligent AI tool could capture all of those issues and fine-tune that a little bit more, to address those kinds of concerns that plague employers in litigation and day in and day out when we're dealing with employee concerns.


Pamela Isom: 21:02

And if we go back to some of the requirements that we have to meet if we're looking to utilize AI, or just to practice good data stewardship to begin with, we want to have an approach in place so that we are applying some consistency when it comes to our data.


Pamela Isom: 21:27

So now you can see the correlation, and I'm bringing this up to help share that correlation, when we hear people talk about why data is so important. Here's why it's so important: because it has so many implications, not only the fact that you can have too much data within the organization that you don't know how to manage, but also because it's influencing decisions, and it has just a huge impact on the organization. So it's important to take a step back, especially if you're looking at using tools like AI that are going to depend heavily on the data, and go back and look: when we talk about data curation, when we talk about cleaning the data, this is what we're talking about, and this is why it matters. So I wanted to bring that out, and I think you just helped make the case.


Lindsey Wagner: 22:22

Okay, and that's so interesting, Pam, too, because the other consideration and the big conversation around AI in the workplace, of course, is the regulations, and the opportunity for bias to creep into the decision-making. If you have bad data, or if the model is being trained on data that integrates a bias or a slant, a lot of employers might not have an understanding of the way that the algorithms work, either because they don't have a technical background or because it's part of a black box of the developers; they're not ready to share that information because it's proprietary. So now we're seeing states propose legislation. It's a conversation on a daily basis with a lot of legislators: how are we going to ensure that this AI isn't going to amplify biased decision-making, and put checks and balances around that, to the extent that employers are going to be relying on these tools to make employment decisions? More recently, last month, we saw Colorado become the first state to actually pass an AI act that's going to impact regulations in the workplace, which will be in effect in 2026. Likely at some point in time, states like California and New York will follow, after New York City Local Law 144, which was the first legislation to take effect. But all of those come together with what you say about cleaning the data. Also make sure that, if you are going to use an AI tool in the workplace, you consider the idea of impact assessments or bias audits, to ensure that you're engaging not only in best practices but also staying compliant with the law.
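[Editor's note: one common quantitative check behind the bias audits mentioned here is the four-fifths (80%) rule from the EEOC's Uniform Guidelines on Employee Selection Procedures, which also underlies the impact ratios reported under NYC Local Law 144. A minimal sketch in Python, with hypothetical group labels and applicant counts:]

```python
# Hypothetical selection counts from an AI screening tool (illustrative only).
selections = {
    "group_a": {"applied": 200, "selected": 120},  # selection rate 0.60
    "group_b": {"applied": 150, "selected": 60},   # selection rate 0.40
}

# Selection rate per group.
rates = {g: c["selected"] / c["applied"] for g, c in selections.items()}

# Four-fifths rule: flag any group whose selection rate is below
# 80% of the highest group's rate.
highest = max(rates.values())
impact_ratios = {g: r / highest for g, r in rates.items()}
flagged = {g for g, ratio in impact_ratios.items() if ratio < 0.8}

print(rates)          # group_a: 0.60, group_b: 0.40
print(impact_ratios)  # group_b's ratio is 0.40 / 0.60, about 0.67
print(flagged)        # group_b falls below the 0.8 threshold
```

A ratio below 0.8 is evidence of possible adverse impact, not proof of discrimination; a real audit would add statistical significance tests and legal review.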


Lindsey Wagner: 24:15

One of the interesting things about the conversation around AI tools is a misconception that, because you might be located as an employer in a state that has not regulated AI per se, it's therefore a wild west. And by all means, it is a little bit of a wild west when it comes to using AI.


Lindsey Wagner: 24:34

But as far as how AI use in the workplace is regulated, one of the big missions of the Equal Employment Opportunity Commission, which is the federal agency that regulates workplace laws at the federal level, has been sharing knowledge and information across the workforce, really to remind workers and employers that, hey, just because AI isn't per se regulated in the workplace, we can still apply the already existing federal law to these practices.


Lindsey Wagner: 25:12

So, for example, the EEOC has issued technical guidance for employers using AI tools in the workplace on how to comply with Title VII of the Civil Rights Act of 1964 and how to comply with the Americans with Disabilities Act, and those are great guidance documents that can help employers when you're considering integration, especially if you want to engage in best practices and really get ahead of the curve before you're in a state that's per se regulating this as well. Because even though those practices might not be regulated, we've already seen litigation, most recently against Workday, challenging those practices: one, can they be liable as an employer under these existing federal laws? And we'll see; that decision will probably come out any day now. We're recording this in early June, so perhaps when this podcast is live we might have an answer about that. But we're seeing that even in states where, for example, California doesn't have legislation passed yet regulating AI, those practices are still being challenged under existing federal law.


Pamela Isom: 26:27

Well, I like that you brought up the EEOC and the ADA. When you think about AI and you think about ethics and equity and civil rights and human rights, that's where you go. First of all, you use common sense. Then you go look at legislation like that. So if you aren't sure how to go about putting safeguards in place, how to go about putting tests in place to guide the model outcomes, go to legislation like that, because you cannot violate our civil rights, you cannot violate our human rights; you must respect our rights. You shouldn't have to, but if you aren't sure, do that, and when you're working on policies, for instance, include concepts like those and things that are required in your policies. It's a great way to verify and validate outcomes. Some will say, well, how do I know if my model is biased? Well, you can trace your test cases back to some of the legislation. And then another thing: the ADA talks about reasonable accommodations. Well, what are reasonable accommodations? Are we able to use AI so that maybe it helps us come up with good reasonable accommodations, or alternate accommodations to address a situation? That's a good use case for AI. Or is the AI going to be hindering reasonable accommodations for a user? So there's two sides to the equation. Is it going to exclude me from a job opportunity, for instance, because of my gender or because of my race? How do you test for those types of situations? The test cases aren't spelled out in the EEOC's guidance, but the things to test for are there. If you just don't know, or are overwhelmed and not sure what to do, go check out some of that legislation. It's always good to have practical examples and a good frame of reference.


Pamela Isom: 28:55

The other thing I want to point out is that I think we should incentivize employees to come up with good use cases for tools like this. So come up with some good use cases and incentivize them. What's the incentive for using and identifying? Maybe there's a challenge where we come up with great ideas and the solution is inclusive. So how are we incentivizing our employees and our staff to do so? I think we should think more about that.


Pamela Isom: 29:32

But those are good points that you brought out, and one thing I want to talk about before we run out of time here is policies. You mentioned policies earlier. I'm going to go into some of the things, because you and I have some teaming that we're doing around coaching and training, which I love. So we're doing some coaching, we're doing some training, and we're guiding organizations through how to go about creating policies. And I know one might say, well, why should I do that when I can just use AI to create the policies for me? So we gave some examples: refer back to the EEOC and some of the legislation from there; refer back to the executive order from the Biden administration. Those are things you can refer to.


Pamela Isom: 30:20

But sometimes you need that personalized attention to help us through some of the challenges, to help us think about, for instance, the example that you shared earlier. Not everyone realizes that AI is in the tools that they're acquiring. What kind of questions do we ask our vendors to safeguard and help us not lose the trust that our clients have placed in us by violating confidentiality agreements, things like that? So the training that we're talking about doing, that we are starting to undertake, I think is a great way to help people understand more about some of the safeguards that you want to put in place, and to help them understand more about ethics.


Pamela Isom: 31:09

What does ethics really mean? What should be some key items in policies? And so I just want to point out a couple of points that we have been talking about. I have one of the policies in front of me here, and one of the sections says choice of law and jurisdiction. So, just as an example, there are regulatory requirements at the state level, which you talked about, and at the federal level. Sometimes it's hard to understand what's applicable to you. That's why we're including this in our training, and as an example of one of the components within a policy. What do you want to say about that? What do you want to add?


Lindsey Wagner: 31:57

I think the realization for employers is that employees are using Gen AI, whether or not you condone it, whether or not you say it's okay. And I've talked to friends and colleagues, and they're like, yeah, okay, maybe my employer gives me a work computer and says you cannot take the data off of there. They're going to go use their personal computer, they're going to use their phone, they're going to be using that, because it makes them more efficient and it helps them with creativity. So that goes back, Pam, to all your ideas about the good. There is a lot of good, but I think the problem becomes, and this is why AI use policies are so important, but also coupled with educating employees: how does the AI work? Because there are a lot of bells and whistles. You turn on the news, you read, you talk to friends. It's like, hey, it has the capacity to do all these cool things and make our lives easier, and that's amazing and should be celebrated. But an employee also needs to understand the implications. It could be as simple as saying, okay, look, if you're going to use a ChatGPT, a Claude, or whatever, make sure that your settings are such that it's not learning from your information, and you can use it for this, that, and the other tasks, but you can't use it if you're engaging in these tasks. And so really think about: what information is crucial to keep confidential for us, and what are we going to permit and what aren't we going to permit? But with the realization, not being naive, that if we say no, it doesn't mean our employees aren't going to be using it. I think it's important for employees to understand why, if you're going to prohibit use of that tool, or that type of tool, for a certain category of information, why that is. Because this is new, and they might not understand the idea of their data getting integrated, the idea of the model learning.


Lindsey Wagner: 33:50

When Gen AI was just coming out, there was a story reported in the news about how a Samsung employee may have entered some proprietary code into a Gen AI tool like ChatGPT, and that then was integrated into its system, and there it was for the public to use for any purpose. But at the time, it's possible that individual might not have understood the implications of putting that code into this particular program and why that's an issue. And so, at least coming from an education standpoint, then it makes a little bit more sense to somebody. You're not just being archaic and saying, hey, we have these rules and you have to follow them, but understand why. So then when an employee goes to use a program, they can be a little bit more mindful of their use, or consider which use cases might be permissible and which not.


Lindsey Wagner: 34:40

And so, Pam, when you said also about giving employees the opportunities to use tools, one of the cool things that I've seen is employers that will say, okay, well, let's have a sandbox here for employees to play, and if there are certain employees that want to engage with certain tools, then we'll give you a closed-circuit opportunity to test those out, so that we're not going to implicate all of our company data at large, but let's try this out and see how it works, maybe with a select team or something else, and if this does seem like something that can make us more efficient and something that we can roll out at larger scale safely, we'll do so. And so then it gives the opportunity for creativity, for innovation, with the realization that these tools are here and they're in the workplace, and so we can't just say we're not going to use them, we're not going to allow AI. Because, inevitably, if your business is going to be surviving in this lifetime or the next, you're going to be facing these issues, and it's something that we all need to be addressing.


Pamela Isom: 35:44

Yeah, I agree with you. I was speaking with a colleague not too long ago, and one of the things we talked about was to imagine an organization that has no policy, that has no governance. When people are motivated to use the tools, what are the rules, what are the guidelines? And if they don't have any, that doesn't mean that they're not going to use it. But what are you doing to reduce the confusion? And even if you say AI is not permissible within the organization, and I don't recommend this, are you explaining why? Do you even understand why not? So that's where you want to make sure you start thinking through the policies, the transparency, the disclosure, components like data protection. What is data protection? We gave some examples here. Why is quality so important? So those kinds of things.


Pamela Isom: 37:00

If you understand why it's important, then you'll know that these types of activities should be a part of your policy. So, for instance, maybe a policy is: we can have a proof of concept, but we're not going to activate the chatbots or the AI agents until we have gone through the ethics review process. That's why those are important, and I won't go into all of it, but I appreciate the fact that we have our collaboration. I'm glad that we met when we did, and that we saw the need and started to look into working together, because we saw a legal need as well as a technology need. And what I always stress is that we also realize that none of this is effective without considering the human being.


Pamela Isom: 37:59

I appreciate the collaboration and I just wanted to point that out, as to why. And I didn't even touch on the intellectual property piece, but that should be a part of the policy too. So what we're doing is important and hopefully will guide some even more robust practices when it comes to AI adoption and integration. So, Lindsey, we're running out of time here because we've been having such a good time. One of the things that I always do is ask the guests to share words of wisdom or experiences for me as well as for the listeners. But always before I do that, I ask: is there anything else that you wanted to discuss while we're together here, before you share or impart your words of wisdom or guidance or insights?


Lindsey Wagner: 38:58

I tell you, with this conversation we could be talking for days about all the implications, but I would go back to something we've discussed. My words of wisdom: to overemphasize the importance of employee training when it comes to AI. And it's interesting because, with regard to anti-harassment in the workplace, we saw the Me Too movement, and from that we saw legislation develop, with states now mandating that not only do you have to have a sexual harassment policy in place, but you have to train your employees on that policy. And why? Because if you just give a policy, how many employees are going to read that policy? How many individuals listening to this podcast right now have read through your entire employment handbook when you've worked at an agency or at a company? Probably not.


Lindsey Wagner: 39:49

And so if you're not trained on it, if you just get that policy, you're not going to have an opportunity to understand why this policy is in place until you need it, until you're like, hey, what's our PTO policy? I want to take a vacation, I'm going to go read it now. So one thing to consider is that this isn't any different, in some ways, from other policies we've seen before, other ideas we've seen before, and think about the ways that we've built success around those. It's been through developing a policy, understanding parameters of what's permissible and what's not in the workplace, and then educating your workforce on that. And so it's feasible to do. We just need to understand and embrace the fact that this technology is here, it's integrated into the programs we're using in the workplace, and if we want to get ahead of it, or really, it's not even getting ahead at this point, it's catching up, the way to do that is to get a policy in place.


Pamela Isom: 40:45

Okay, and then you also pointed out, I remember earlier, that we want to remember that AI is pretty much infused in everything. You mentioned the example of Slack. I should point out, too, that it's starting to get integrated into the operating system. There are algorithms. It's there for a reason, and so you want to understand that AI is integrated into pretty much everything that we do, whether you realize it or not, and that's an example with the new solution that's coming out. It's in more and more products, and they're there for a reason, and the intentions, I think, are good. You just have to have governance in place, and privacy protections, and education and training in place for employees to help them better understand how to utilize these tools and technologies for good. And so thank you for your insights, and I really do appreciate the opportunity to be able to talk to you today. I appreciate you just taking time out to be here. So nice to talk to you, Lindsey.


Lindsey Wagner: 43:16

Thank you, Pam, and the one thing that I'll add at the end, just as a parting word, kind of to emphasize what you just said too, is just the importance of the takeaway of upskilling.


Lindsey Wagner: 43:26

So I think the takeaway from this conversation, from your entire podcast, is that taking the time to learn the concepts of AI, as an employee, as a business person, whatever, will not be for nothing, because it's going to either benefit you as an individual and add to your value as an employee, or it's going to help you to protect your business and make it more efficient. So take the time now. It is a work in progress for me. Every morning I wake up and, with my coffee, AI has become my hobby. People say, what do you do as a hobby? Like, I like to work out, and I'm an AI nerd, and that's my hobby now. But it does demand a lot of attention because it's constantly changing. But having that familiarity with it, not being scared, embracing it, it's here to stay, I think, will be good for everybody. And so cheers to continuing to learn and innovate and approaching things in an ethical way that's legally compliant.


Pamela Isom: 44:27

All right. Well, thank you very much, and we'll wrap it up here.