AI or Not

E030 - AI or Not - Hal Daumé, Amy Cyphert and Pamela Isom

Pamela Isom Season 1 Episode 30

Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.

Discover how artificial intelligence can be molded into a trustworthy ally in our society with insights from our distinguished guests, Hal Daumé and Amy Cyphert. Hal, a trailblazer from the University of Maryland, shares his fascinating transformation from a purely technical AI expert to someone deeply engaged with its societal implications. Meanwhile, Amy, a legal scholar from West Virginia University, brings her unique perspective from Carnegie Mellon, offering a compelling narrative on how law and technology must interface to ensure AI's responsible evolution. Together, we unravel the tapestry of cross-disciplinary collaboration needed to forge AI systems that command trust.

Venture into the murky waters of AI regulation, where copyright dilemmas and intellectual property rights present significant challenges. We examine the tightrope artists walk when using AI in their creative work, juxtaposing heightened innovation with the current inadequacies of copyright law. Through this discourse, we highlight the urgent need for a broader regulatory framework, emphasizing the power of executive orders and the necessity for global cooperation in AI governance. As we stress the vital role of local institutions and elected officials in shaping these policies, this episode promises to equip you with a nuanced understanding of AI's legal landscape and its societal impact.

[00:17] Pamela Isom: Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and your digital transformation journey.

[00:33] I am Pamela Isom. I'm so excited to be here today. I want to thank you all for having us. And we have two people that we are going to be speaking with.

[00:44] We have Hal, and you might have to help me pronounce the last name, but I think it's Daumé.

[00:52] And then we have Amy Cyphert. So thank you both for joining me today. We are live. So thank you so much for being here. The way that we're going to start this out is I would like you to answer a few questions that I have, but we will use a conversational approach for the listeners.

[01:12] This is live. The theme is AI at Work: Building and Evaluating Trust. And I am working with the Institute for Trustworthy AI in Law and Society. And we are at a conference here in D.C.

[01:29] So let's get going.

[01:31] Hal, let's start with you. Tell me more about yourself, your career, and your TRAILS journey.

[01:38] Hal Daumé: Sure. So first, thanks for having us. Thanks for coming down here to GW to meet with us. And yes, I'm excited to talk about this. I am scared by it being live.

[01:48] And you can't edit out silly things that I say. Hopefully there won't be too many. Yeah. So I'm Hal. I'm a professor of computer science at University of Maryland. I've been there since 2010.

[01:59] And, you know, when I started doing research, I was, I guess what I would call, you know, sort of like very strongly a technical AI person. Like, almost all of my work was about building new technology and pushing it forward.

[02:12] And, you know, I lived in this research area called natural language processing, which actually I couldn't even find when I was an undergrad because no one had ever heard of it until one of my friends told me, hey, there's folks who do, like, language computer stuff.

[02:25] And I was like, wow, that sounds amazing. I want to do that. And, you know, now we're in this world where, like, I sit down on the plane to get here and, you know, the person sitting next to me wants to talk about natural language processing, which is like, wild to me.

[02:38] And I think that, you know, this reflects like this broader trend of the role that AI technology is increasingly playing. Right. And I think people think of it now because of things like, you know, ChatGPT and all these chatbots, but it's been with us for a while.

[02:51] Right. Like, I think prior to chatbots, recommender systems were the main way that people interacted with AI technology.

[02:58] Maybe it just didn't quite have the same buzz. And so, you know, I think this societal shift toward AI being less of a purely academic affair and into something that, you know, really is impacting people's lives on, like, a daily or hourly basis,

[03:12] like, it also sort of reflected my own shift in interest to thinking,

[03:18] yes, continuing to think about the technology side and how do we push AI technology development, but also, like, what does that mean for the people who are using the systems, and what does that mean for society in general?

[03:28] And so I think the second, you know, sort of like the shift toward this more like human and social direction is, you know, I think what TRAILS is sort of reflective of from my perspective.

[03:41] Pamela Isom: And so how do you see the TRAILS activities meeting your goals and objectives?

[03:52] Hal Daumé: Yeah. So, you know, I'm essentially an engineer. Right. Like, I live in a computer science department, but I basically build technology. That is what I do. And, you know, as someone who builds technology, there's just a lot that I don't know.

[04:08] Right. I am not a sociologist. I am not a governance expert. I am not a lawyer. I am not, like, any number of things except for engineer. Right.

[04:17] And I can't be, right? Like, I'm not going to go get, like, five more PhDs so that I can do all of these things myself. Like, that doesn't make sense.

[04:25] There are people who are way better at them than I am. And so I think for me, TRAILS is this amazing opportunity to build connections with the people who do and understand the things that I don't do and I don't understand.

[04:38] And that takes time and it takes conversation and it's hard. But I think that's, you know, for me, that's what's exciting. It kind of gives me an excuse to talk to people that, like, I wouldn't have a chance to talk to or work with otherwise.

[04:50] Pamela Isom: Well, I want you to know that I appreciate your work. I have notes here that you are a professor in the Department of Computer Science, University of Maryland, and the Director of TRAILS.

[05:01] So I know that must be exciting for you. And that's why I was asking you the questions about that, because I think that the humanities side, the human factors of AI, that's deep.

[05:16] That's a deep conversation, right? Because we're dealing with computer science, and we need the human factors. So I certainly appreciate the fact that you are the Director and are looking at doing things to bring the trustworthiness about and let it evolve as we are evolving.

[05:34] So, Amy, you are an associate professor at the West Virginia University College of Law.

[05:42] And so I'd like you to tell me more about yourself, your career, your TRAILS journey, and AI at large.

[05:53] Amy Cyphert: Great. Thanks, Pamela. And thank you again to everyone from TRAILS for inviting me to be here. It's really great to be here. I think my AI journey has been really providential.

[06:03] It's been a lot of being in the right place at the right time and being curious. So I know we've got a lot of students out there, so keep being curious.

[06:10] It's really important. I went to Carnegie Mellon as an undergrad, and so you may be going, oh, like Hal, she's an engineer. No, I was a creative writing major, but I was at a school that had a really strong technology feel to it.

[06:25] And so I got really kind of interested in ideas when I was an undergrad. And then I got really lucky my first year of law school. My torts professor, who I was randomly assigned to, it was his first year teaching torts, and he was this brilliant Internet scholar who went on to run the Berkman Center for Internet and Society at Harvard Law School.

[06:45] So he taught us torts through the lens of the Internet and AI and other things. So that was really exciting. And then I clerked for a federal judge after law school and worked for a large law firm in New York City for a few years.

[06:59] And when it was time for me to come home, because Morgantown, West Virginia is my hometown, it was really exciting for me. I've been at WVU since 2010, like Hal has, in a variety of contexts.

[07:10] But when it was time for me to come home, that was really great. And I started doing some scholarship. When law professors say research, we just mean we write law review articles.

[07:21] But I started writing some law review articles on things like the use of machine learning algorithms for online school surveillance. And that particular piece, which had a lot of privacy dimensions, came out just as the pandemic hit and everybody's kids shifted to online learning.

[07:37] So that was good timing. And then that particular spring of 2020, I was home with my third-grade twins and my pre-K youngest, and they were all doing remote learning.

[07:49] And I kind of needed a way to stay sane, trying to work from home, with my husband working from home and our three kids doing online learning. I started reading about this thing called GPT-3, and my mind was blown.

[08:01] I was like, this seems like such a game changer, especially for people like lawyers, since so much of what we do is writing, right? We're highly paid rhetoricians in one sense of the word.

[08:12] And so I wrote an article about the ethical dimensions of lawyers' use of GPT-3 and other large language models. And then when ChatGPT came out, you know, a year or two later, that was one of the law review articles at the time that was focused on large language models.

[08:28] So I was able to be at the start of that conversation, which was great. So at WVU, at the law school, I teach classes on AI and law. I just finished teaching a seminar on regulating artificial intelligence this past semester.

[08:41] So that's one of those classes where you try to not even really have a syllabus. You're like, we'll kind of follow the news and we'll see what happens this week.

[08:49] And we'll just adjust as we go. So it's really exciting to me. And I love thinking about how AI fits into larger societal questions, because that's the point of regulation. The point of regulation is to maximize the good and, you know, minimize the bad.

[09:07] And so does our current AI regulatory landscape do that? Right? I have a forthcoming law review article arguing, not yet. We'll get there, but not yet. But that's my AI journey.

[09:18] Pamela Isom: Why not yet?

[09:20] Amy Cyphert: I think that it's really difficult to get AI regulation right for a variety of reasons, including reasons that have been on full display in the news over the last month or so.

[09:31] I think that we've had a paralysis when it comes to doing AI regulation at the federal level in the US, and that's because it's hard and the stakes are big, and people don't want to get it wrong. They don't want to be somebody who ties risk levels to compute and then is mocked by all of the, you know, folks who are like, that's not the right way to do risk, or that's just a moving target.

[09:53] Or they don't want to do something that is supposed to help innovation, that instead stunts innovation, or they don't want to inadvertently get the export controls wrong and have our competitors end up with too many of our GPUs.

[10:05] Right? There are a lot of reasons it's very hard to do. And so we've decided maybe we just won't do anything. We'll take a wait and see approach. The problem with that is the regulatory landscape exists already.

[10:18] So a choice to not regulate AI is actually a choice to say, AI will be regulated by copyright law, AI will be regulated by antitrust law. AI will be regulated by privacy law and AI will be regulated by private tort law actions where people sue companies for harm.

[10:34] And in my article I talk about how I think there are deficiencies to each of those. None of those reflect all of the various considerations that we all have as a society.

[10:44] They can't possibly express all of the things that we want our regulations to do for us. And so we're not there yet. I'm hopelessly optimistic. I still think we'll get there because we have to and I think, you know, events like this really help advance the ball.

[11:01] Pamela Isom: So I was in a discussion earlier with Amy, and in a couple of other discussions, and we were talking about copyright, the latest developments and copyright laws in general, and they're very scarce, you know. So there's this recent publication, I think, that builds on top of what was in the past, and it speaks a lot about how much human control you have and how much ingenuity the human has brought to the table as to whether you will be given copyright authority or not.

[11:40] Right. Granted copyright privileges or not. What's your take on that?

[11:46] Hal Daumé: Well, so I guess my first take is I'm not the lawyer sitting up here.

[11:50] Amy Cyphert: It's okay.

[11:50] Pamela Isom: I want to get a non-lawyer's perspective.

[11:53] Hal Daumé: Yeah, that's fine. Right. So I guess there's, you know, sort of like two sides to the copyright question. There's the use of copyrighted materials in training models, and then there's the question of, like, can the outputs, can works of art, be copyrighted when AI played some sort of role in the process of their creation?

[12:14] Pamela Isom: Right. So let me say, because I sing, right, and the music's in my head, but I don't write, I don't compose the music. It's just in my head.

[12:25] So I want to have AI create the music for me if I sing the song, and I won't do it, because it's my stuff. Right. So that's why I asked the question. Like, I'm not a lawyer either, but I think about these kinds of things.

[12:40] How do I trust my intellectual capability with an AI? Do you have concerns like that?

[12:48] Hal Daumé: Yeah, I mean, I think we all should, you know. So this is sort of on the more production side. You know, I guess, first, most of what I know about this is through conversations with another TRAILS member, Bob Brauneis, who's in the GW law school and studies things like copyright and the First Amendment and those sorts of topics.

[13:08] And, you know, one of the things that kind of surprised me, and kind of didn't, as a non-expert, is that, you know, the amount of effort I put into something doesn't seem to have much to do with whether it's copyrightable or not.

[13:20] Like, there are criteria, and that is, generally speaking, not one of them. You know, I have a really good friend who's a composer and singer, and I've talked to her about her experience using AI, and she's expressed two sides, which I think, you know, I hold to.

[13:41] Like, one is, you know, this lets her do things that she couldn't do before, right? So she performed a piece, she lives in LA, that includes vocals that would simply be impossible to produce without some sort of computer aid.

[13:57] And so like on that side, she sees this as, you know, basically a way to like amp up her creativity, right? It lets her do things that she couldn't otherwise.

[14:07] You know, on the other hand, she shares exactly the concerns that you have, which is, you know, do I have any rights over this? You know, what happens when I, as a composer, am not needed anymore because, you know, like, music GPT just writes all the songs?

[14:22] And I think this goes back to the regulation question that, that Amy was bringing up, right? It's like, you know, I think, you know, we want to maximize the good and minimize the bad, right?

[14:31] And so like, I think, you know, my friend who's the composer did a pretty good job of saying what the good is and what the bad is. And, you know, and then we go talk to Amy and she tells us what we do about it.

[14:42] Pamela Isom: So, Amy, now tell us more. What can I lean on when it comes to regulations? Tell me more about the law.

[14:51] Amy Cyphert: Oh, I thought I was going to get an engineering question after Hal got a copyright one. So I'm very happy to be asked a regulation question. And I just want to go back for one minute.

[14:59] I think the fact that both of you felt the need to caveat, but I'm not a lawyer, is why copyright law is not the right way to regulate AI entirely.

[15:08] Right? Because AI doesn't just impact lawyers and IP lawyers. And so listen, I love copyright law. Copyright law has a very important role to play. I'm not saying abolish the Copyright Act.

[15:19] That can't be the basket we put all of our eggs in. It's just not a large enough vehicle for all of our societal concerns, hopes, dreams, et cetera. So with regulation, where do we lean?

[15:30] I mean, we're going to see, I think, and we are seeing just over the last 10 days, really rapid change. So it's been an increasing trend in recent administrations, where Congress has had some deadlock, to do more regulating, more kind of lawmaking, via executive order.

[15:49] Right. Which is where the President can do some things more unilaterally and doesn't have to try to get Congress to go along. But we're seeing both the pros and cons of that way of regulating.

[16:01] Right. So the Biden administration had a pretty comprehensive executive order on AI that was entered, I think, in October of 2023. And one of the very first things that the new Trump administration did was to remove that executive order, right?

[16:16] To rescind it. Although it's a little unclear exactly what the impact of the rescission was there, and there may be parts of it that are around. I don't want to offer anybody any unsolicited legal advice on that.

[16:29] So I think trying to do things via executive order is going to be hard. And the hard reality is that this is going to require global cooperation.

[16:41] Okay? AI is a global issue, and it's going to require global cooperation. AI does not respect borders. It's not going to stop. I mean, I think the DeepSeek story that the President was referring to earlier is in some ways an example of how export controls are not going to work in AI.

[16:58] Like they might work in other fields that are regulatory targets. Though one of the founders of Anthropic has a great essay up right now, just out this week, on why he thinks DeepSeek is a story of how export controls do work and maybe are a good thing to keep thinking about.

[17:15] So in terms of what you can lean on, I think you have to lean on speaking up and telling your elected officials that you care about these issues and you would like them to take action on it, and you would like to think about what it means in your community.

[17:28] I would start hyperlocal. Your universities have AI policies, and you should be part of making sure that those reflect the university values. Your companies no doubt have AI governance documents, and you want to be a part of creating those and helping make sure they're good.

[17:46] Sometimes people say to me, oh, are you an AI optimist? Are you an AI pessimist? And I always say, I'm an AI pragmatist.

[17:53] I'm really excited about AI. There's a lot of really good that can happen. You know, my university is in a relatively poor state, but a state where tremendous, great changes could come from AI.

[18:06] If we do it right, it could really improve a lot of things for the people in states like West Virginia. Or it could be an unmitigated disaster, another example of extraction capitalism that kind of messes things up.

[18:18] But I want us to get it right because there's tremendous upside and I think we have to work together across universities, across disciplines, and ultimately across nations if we want to see that.

[18:30] And I want to just be clear. I'm aware of how unrealistic it is to say we need global cooperation at a time when large multinational treaties and organizations are under attack and people seem to be turning away from them.

[18:45] But I don't see a way forward without some level of global cooperation. And things like the Bletchley Declaration that happened in the UK last year are a great example of where maybe we can start to move in this direction.

[18:58] Pamela Isom: Well, that's good. I appreciate your comments there, and I know we're going to run out of time here. I was thinking about the DeepSeek situation. So the thing that I will add to that is, I was in a conversation with someone recently, and I was expressing to them that,

[19:14] speaking of trustworthy AI in law and society, on the society perspective, we really want to think about counting up the total cost.

[19:24] So yeah, it's cheaper and supposedly it is more energy efficient. I'm not so sure. But what about our data? What about our information? Right? So look at the total cost.

[19:38] Everything that is supposedly cheap really isn't. You know, that's why programs like this are so valuable. That's why I'm so happy to be here. We are at the last part of the podcast where I typically ask the question, share with the listeners words of wisdom or your call to action.

[19:59] So we're going to start with you, Hal. Please let us know your perspectives there.

[20:06] Hal Daumé: Yeah, that's a hard question, but I'll give it a go. So I'll come at this from my perspective, which is that of, you know, a technologist and engineer. And, you know, the thing that I say a lot, and that I'll say in this context, is I would really like work on the technology side of AI to think a lot more about what's often called AI for augmentation rather than AI for replacement.

[20:33] So the field of AI going back to the 50s is really a study of automation.

[20:43] And originally it was the Turing test, which is basically, like, can I pretend to be a human? Then it's like, can I play checkers? Can I play chess? And all of these things are basically, can I take this thing that a person does and automate it? We used to care about things that were hard for people to do.

[20:57] Now actually, we care mostly about things that are easy for people to do. But, you know, it's really about this. Like, can I just take the human out of the picture entirely?

[21:06] And I think that's both kind of a boring perspective, like, I don't know, I don't find it particularly intellectually engaging, but I think also it's where a lot of, like, challenges come from.

[21:17] So, you know, you see things like, oh, with this technology, I get improvements in, like, employee efficiency by 20%. Right? And it's like, okay, you know, that's great if I own a company and I'm trying to, like, maximize my profits or whatever.

[21:33] But, you know, I'm in academia for a reason. That is not my goal. Right. And so I think that it's much more interesting to think about, like, how do I improve the lives and the work of people who use these technologies?

[21:47] So how do I develop AI techniques that really help people do the things that they want to do rather than, like, get rid of the people in the loop entirely?

[21:58] And I think intellectually that's actually a much harder question. And among other reasons. One of the reasons is because we generally have data for replacing people, right? Like, I just track what you do, and then I train a system to replicate it.

[22:12] But if I want to build a system that, like, helps you do whatever it is that you're doing, I can't just, like, look at your output and try to repeat that.

[22:19] I have to, like, actually understand, okay, what is your workflow? Like, how are you using different pieces of technology, and so on.

[22:27] So what I would encourage students who are, you know, maybe on the more technical side of things to think about is, how do we build technology that helps people do what they want to do rather than doing it for them?

[22:41] Pamela Isom: Okay, that's very thought-provoking. Okay, Amy, what are your words of wisdom or call to action, or both?

[22:50] Amy Cyphert: I love that. And I'm going to piggyback on what Hal said, because I was inspired, moved to action, by what he said. Most folks in this room, you just got great advice for folks who are more on the technical side.

[23:02] And I would imagine that's a lot of people in this room. But maybe, Pamela, some of your listeners are not. So my call to action is to folks who are intimidated by AI because they don't understand it, and it seems very opaque and very confusing.

[23:15] And I'm not going to suggest that it's not. But I am going to say, educate yourself, try to learn more. You do not have to understand every part of AI in order to have an informed opinion and to have a seat at the table where you're advocating for the way AI is used in your workplace or your community or your state.

[23:35] And so my call to action would be to folks who maybe feel like I'm intimidated by this and I don't understand it. Maybe check out your local university, ask if they have, you know, an online lecture you could watch or if you could attend a lecture or something.

[23:49] You'd be surprised how often, if you ask a professor if you can sit in on a class, they are thrilled, because that's very exciting for us. And you'd be surprised how many people will respond to your emails or will recommend reading to you or something like that.

[24:03] So I think if you are listening to this podcast and you're thinking, that's great, but I don't understand this, go ahead and make yourself a reading list, watch some lectures, and get to a place where you feel a little more confident saying, okay, I still couldn't actually replicate that, but I know what they're talking about, and I have an informed opinion now. Because I don't think anyone can afford to sit this one out.

[24:23] I think it's really important. It's going to require all of our voices if it's going to reflect the future we want it to reflect.

[24:30] Pamela Isom: Okay, well, I want to thank you for joining me on the podcast today, and thanks to the live audience for being here. And you're listening to AI or Not, the podcast.

[24:41] Thank you very much.

[24:43] Hal Daumé: Thank you, Pamela.

[24:44] Amy Cyphert: Thank you.