AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
E015 - AI or Not - Brian Spears and Pamela Isom
Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.
What if AI could be the key to unlocking the secrets of the universe and revolutionizing national security? Join us on AI or Not as we sit down with Brian Spears, Director of the AI Innovation Incubator (AI3) at Lawrence Livermore National Laboratory, a trailblazer in computational science and AI who has been at the forefront of monumental breakthroughs like the historic fusion ignition in 2022. Brian shares his journey from designing nuclear fusion experiments to leading AI strategy efforts, and he dives into the game-changing potential of AI in scientific research, material discovery, and smart manufacturing.
Discover the cutting-edge advancements in AI-driven science within the Department of Energy, including remarkable efforts like redesigning COVID antibodies and pushing the boundaries of cancer therapy. Learn about the groundbreaking Frontier AI for Science, Security, and Technology Initiative (FAST), which aims to revolutionize scientific research and national security through big data and advanced computing. We underscore the urgent need to expand the AI and computer science workforce to keep the United States technologically competitive.
As we wrap up, we tackle the pressing issue of ethical transparency in AI. From personal anecdotes about intrusive driving behavior tracking apps to the collaborative efforts of government agencies and private industry, Brian and I emphasize the necessity of transparent and ethical AI development. We also reflect on the importance of persistence, patience, and strategic leadership in AI, celebrating the pivotal contributions of the National Lab in advancing scientific AI. This episode is a must-listen for anyone interested in the transformative power of AI and the ethical considerations that come with it.
[00:14] Pamela Isom: This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and are not legal, health, tax, or other professional advice, nor official statements by their organizations. Guest views may not be those of the host. Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and digital transformation journey. My name is Pamela Isom and I am your podcast host. We have a really special guest with us today: Brian Spears. Brian is a leader in computational science and AI. He's director of the AI Innovation Incubator. He's a principal investigator. He's my friend. Brian is my go-to person on AI excellence and AI in national security. I so cherish our relationship, seriously. And I love the things that we did together, from hosting me when I visited your lab, to collaborating with Norway, to establishing policy. You're one of those people that, I know that you know. So thank you and welcome to AI or Not.
[01:54] Brian Spears: Pam, thank you so much for the kind introduction. Yeah, it's a pleasure to be here. I appreciate working with you as well. So thanks again for hosting.
[02:02] Pamela Isom: You're welcome. So, to start out with, will you tell me more about yourself, your career journey, your role, and how you got to where you are today? And where do you want to go?
[02:17] Brian Spears: That's a deep place to start, Pam.
[02:20] Pamela Isom: That's what we do.
[02:21] Brian Spears: Yeah. Well, jumping right in. So, I lead our AI strategy effort at Lawrence Livermore National Laboratory, and it comes from a background of scientific understanding and scientific work. For the past two decades, I've worked at Lawrence Livermore. Roughly half my career has been spent designing nuclear fusion experiments. In 2022, the laboratory achieved fusion ignition for the first time in human history. We got more energy out of a fusion pellet than what we put in with a laser. Huge historic first. And I was leading the modeling and simulation team as a deputy in that role when we did that. Over that span of time, I've been pushing data science and machine learning and AI tools to help us drive those efforts as well. So it's been a long career of focus on being really patient and working for 20 years to make something happen for the first time. And now a transformation to understanding what we can do with the AI science that we've been building over the last decade to do something really new in the future.
[03:29] Pamela Isom: That's remarkable. So what new in the future are you thinking? What can you share?
[03:35] Brian Spears: Oh, there's so much. There's been a really great conversation in the country around being safe and being careful and secure in the way that we do AI research, and that's been fantastic. Now we're focusing really clearly on what the opportunities are. There's the promise and the peril, and we're keeping those in balance. The promise looks nothing short of accelerating and revolutionizing the way that we do science, in a way that's not been seen in 50 or 100 years of modern human science. Some examples include building out AI models for discovering new materials. By materials, I mean imagine a consumer good, like a water bottle that we all take for granted. It's got a polymeric lid, it's got an elastomeric seal, it's got a specially designed alloy that allows it to be made quickly. All of those material discoveries, the alloy, the polymer, the elastomer, have come in the last few decades, and they came at a rate that has been impressive, but is nothing compared to what we can do with AI. AI models that understand molecular space and chemistry space and materials space will generate tens of thousands of new materials far faster than ever before, so that we can do things like bring fusion energy to the world, take care of our national security issues like the nuclear weapons stockpile, and transform medicine. So that discovery phase is full of potential. And the list goes on. We can use similar models to design entirely new engineering and scientific technology systems. We can build out smart manufacturing capabilities to produce, you name it, consumer goods, national security science apparatus, at rates and with fidelities that we've never seen before. And we can use AI to pull that all together, so that you're discovering materials, designing systems, and producing them in a virtuous cycle that is glued together by AI. That will all happen against the backdrop of doing it safely and securely. So one of our fundamental research thrusts is to understand how to do those things, and also how to do them in a way that first does no harm and then provides benefit to people.
[06:00] Pamela Isom: Exactly. So that's an area where we work together, right? Do no harm and at the same time add benefits, so that there is value that is instrumental and integrated into the design. So it's good to see that that work is continuing. Real quick, let's go back to fusion. You said you were leading the modeling and the simulation. Can you, let me see, I know there are some things you can't say, can you tell me more about that experience? What I'm getting at is, when you're leading the modeling and the simulation, which is all a part of the digital transformation and the AI lifecycle, what are some good things and what are some lessons learned in that process?
[06:53] Brian Spears: Well, the good things are we made AI work for us in a really challenging laboratory setting. Effectively, what we did is we predicted for the first time that fusion ignition would occur. We'd never seen it before, and our AI-driven models said, hey, the most likely outcome is that you're going to ignite this pellet. We saw evidence of that in our traditional modeling by itself. We saw it in our experimental data. And then in a third leg, we combined the experiment and the simulation data in an AI model that looked at all of that and said, yeah, it looks like you've got a greater than 50% chance of igniting. That may seem small if you're not in the fusion world, but in the fusion world, that model, for all previous experiments over the last decade, had said you'd have a 5% chance, a 3% chance, a next-to-no-percent chance. So seeing that the most likely outcome would be ignition was transformative, and that model has been correct for the next half dozen times that we have repeated that. So the lesson learned is that you can use AI to join simulation and modeling together with real experiments to give you a worldview that those two things by themselves can only start to see glimmers of. But it all comes into focus when you join it with the AI model. So that's a pro. What are the lessons that we've learned? We learned that it's hard. It takes a lot of computational effort. We did hundreds of thousands of simulations. We joined them with dozens and dozens of experiments. And the effort, for any scientific endeavor or business, in the workflow of joining all of that together to make a product that actually does something, it's hard to overstate how much effort that takes. So what you see in the end, the model that you work with, is just the tip of the iceberg, and under the surface is all of the digital threading that connects data formats, data ingestion, model training, model evaluation, new simulations to fill out holes in the model space, etcetera. All of that is an enormous amount of work that you have to learn with the team along the way. So it's not a con in the sense of there's a pitfall, but it's a cost, and teams have to understand that.
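[Editor's note: To make the pattern Brian describes concrete, here is a minimal, hypothetical sketch of joining simulation and experiment in one model: a classifier is trained on abundant simulated shots, then its probabilities are recalibrated against a small set of real experiments so it outputs an estimated P(ignition) for a proposed design. Every dataset, feature, and model choice below is invented for illustration; it is not the lab's actual workflow.]

```python
# Hypothetical sketch: combine abundant simulation records with a few real
# experiments to estimate P(ignition) for a proposed shot. All data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated "shots": design parameters (e.g. laser energy, capsule thickness) -> ignited or not
X_sim = rng.normal(size=(5000, 4))
y_sim = (X_sim @ np.array([1.2, -0.8, 0.5, 0.3]) + rng.normal(scale=0.5, size=5000)) > 1.0

# A small set of real experiments with the same parameterization (slightly different "physics")
X_exp = rng.normal(size=(40, 4))
y_exp = (X_exp @ np.array([1.0, -0.7, 0.6, 0.2]) + rng.normal(scale=0.7, size=40)) > 1.0

# 1) Learn the shape of the design space from the cheap, plentiful simulations
sim_model = GradientBoostingClassifier().fit(X_sim, y_sim)

# 2) Recalibrate (Platt-style) against the scarce, expensive experiments, so the
#    predicted probabilities reflect what the facility actually produces
exp_scores = sim_model.predict_proba(X_exp)[:, [1]]
calibrator = LogisticRegression().fit(exp_scores, y_exp)

# 3) Estimate the ignition probability for a proposed new shot design
new_shot = rng.normal(size=(1, 4))
p_ignite = calibrator.predict_proba(sim_model.predict_proba(new_shot)[:, [1]])[0, 1]
print(f"Estimated P(ignition) for the proposed design: {p_ignite:.2f}")
```

[The design choice this toy example illustrates: simulations teach the model the structure of the design space, while the handful of experiments anchors its probabilities to reality, which is the "third leg" Brian describes.]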
[09:17] Pamela Isom: I say congratulations to you. I know that it took a while to get there, and I know it took a while to start to really trust the models. I don't know exactly how much time. How long did it take you to finally get to where you really trusted the models and wanted to rely on the outcome?
[09:36] Brian Spears: Well, I'd say we worked for about six to eight years on AI-driven models to get them to the place where we thought we could do something really novel with them.
[09:48] Pamela Isom: Okay, so that's good. I want to congratulate you for the work that you all did around fusion. And I really want to say thank you for taking the time to test and validate the models, and to do it the way I would say we're supposed to do it, which is not rush to get things out the door, but really feel comfortable with the proper vetting and validations in place. I think that is so important. I find that today I'm a little bit in between. I feel like we've got these great innovations that are hitting the market, and I love innovation.
[10:25] Brian Spears: Right.
[10:25] Pamela Isom: So I feel like we've got these great innovations, but at the same time, I'm a bit concerned that things are hitting the market too quickly and humans are being leveraged as test beds. I think this is a good example of how it can wait. Sometimes things can wait so that we get to a comfortable space. I love the power behind fusion, and this is a good use case of how we're able to use AI for the simulations, for the modeling. It's a great example of how we are applying AI for scientific discovery, for making a difference in the world. But what I really value is the fact that the Lawrence Livermore National Lab took their time and focused on delivering a product that is going to add value and that is going to work with reliability. So thank you so much. And I think that's a good example for others. We have to do that. I'm not saying that we aren't doing that; I'm saying we're not doing that enough today. That's what I'm seeing, so this is very hopeful. I want to ask you about FAST, but before I do, is there anything else you want to tell me about what's going on at the Livermore lab?
[11:44] Brian Spears: Oh, well, Pam, you'll have me here for your entire podcast history if we talk about all the exciting things in the lab. There are just plenty of opportunities we're working on in that first-do-no-harm, then-bring-some-good space. There are some programs in biology that are just very exciting. Our bio-plus-AI team has redesigned COVID antibodies again and again, and in fact used an AI-driven method to predict what new variants of COVID would look like, up to the present day.
[12:18] Pamela Isom: You're right. I remember that when I was there. So when I was there with you, you all were showing me some things that you were doing. So that has evolved since.
[12:26] Brian Spears: It has. In fact, that team has built out what you might call future proof antibodies. So the antibodies are designed for future variants of things we expect the virus to do, and those are being borne out in reality. So the virus will have a harder and harder time escaping and mutating to get away from the therapies that we have available. So that team is publishing their work furiously and working with private partners in the pharmaceutical world to understand how to bring this to bear for the good of others. And it's just really exciting. In fact, our small molecule production team has a cancer therapy that you may have seen is being announced, and is potentially going to be there for folks who have no other option in their treatments. So it's AI driven, fundamental science that's doing fantastic things for the world.
[13:23] Pamela Isom: Okay, so that's good to hear. That's good to know. So let's talk FAST. FAST is the Frontier AI for Science, Security, and Technology Initiative. I understand that the intent is to operate as a research and infrastructure development initiative with the goal of developing and deploying high-value AI systems for science. I want to know how. What are we talking about here? Tell me more.
[13:51] Brian Spears: Well, FAST is going to do amazing things. It's a large-scale, whole-of-Department-of-Energy effort to push AI for science to the frontiers of scale. What are the biggest, most transformational things we can do for science, for energy, for national security? We're going to push those efforts in four major lanes. The first is data. We're going to take advantage of the Department of Energy's largest physical science data repository on the planet. We're talking potentially exabytes of data, and for those who are not familiar with exabytes, let's just say millions of trillions of pieces of information that we can use once we've got our data house in order. Second, we're continuing to build out the computing capability of the Department of Energy. Right now, we have the largest computing capability in the government and some of the largest computers on the planet, and we're going to build those out for pushing frontier AI models. So that's data and compute. The next one is the models themselves. We're going to train large, discipline-specific foundation models in science disciplines across the spectrum: for climate and environmental efforts, for national security efforts, for medicine and biology, for chemical discovery, you name it. We're imagining something like a dozen very large, leadership-class models to transform the way science is done. And with that core, the data, the compute, and the models, we are then going to turn to critical applications across all of those discipline areas that I talked about and really offer transformations. So in the material discovery space, we'll use those foundation models to build out a particular new molecule for, say, energy storage materials or radiation-resistant materials that you can use in fusion reactors. There will be hundreds to thousands of applications that we can use a dozen foundation models for. So those are the central pillars of the FAST program: the data, the compute, the models, and the applications. And it brings together all of the equities that only the Department of Energy, as you know well, can bring for the nation. If we were reinventing the department, we would call the Department of Energy the Department of Science. And as such, we have the largest workforce of scientific folks in the Western world, second across the globe only to the Chinese Academy of Sciences. And we are going to turn that workforce to bear on all the problems of national interest.
[16:37] Pamela Isom: Okay, so it sounds, then, like there's an underlying pillar to this. You mentioned data, compute, the models, and critical applications, but you also mentioned an underlying capability, which is the workforce: building up a capability not only to strengthen national security, strengthen our models, and bring together all the information that is across the department and across the complex. I also heard something around building up people and capacity. Can you talk some more about that?
[17:14] Brian Spears: Oh, yeah, absolutely. So everything that happens at large scale in the AI space takes, as I said in our lessons from fusion, a tremendous amount of effort. There's the science effort in each of those pillars, and then there is the digital threading, the glue that sticks all of that together. The workforce that we're imagining across the Department of Energy needs to grow by something like 1,000 people at a minimum, maybe 2,000 people across the Department of Energy. This is enormous, but it's also something that's great for the country. We are building out new skills for people who do computer science and engineering, for people who interact in laboratory and production spaces with the technologies that are actually going to move equipment and change the way that we do our jobs in the actual physical space, all the way up to the research level, folks like me who are going to have hands on keyboard trying to think about new algorithms and new ways to implement science. So this, I think, is a story of the way that AI capabilities are going to bring new opportunities. There are entirely new job functions that didn't exist before that need to be filled with people who can think and operate with these skills.
[18:35] Pamela Isom: Now, when I think of everything that you've been saying, I'm impressed and intrigued with FAST. I think that's a beautiful thing, and it's nice to hear you say that you're going to take that information and, leverage is the word I want to use, you're going to leverage the vast amounts of data that the department has. One of my concerns when I was there is that we have all this information, and there are a lot of insights in that information, from research and hypotheses that made it past the TRLs to products, to those hypotheses that didn't. There's just data that felt like it could be tapped into. So to hear you say that you'll be utilizing that data to build and to define and refine new models, I think is a good thing, because data reuse is so important. One of the things that I want to know about is, we've got the supercomputers, we've got so many supercomputers at DOE. Are we building another one, or are we looking at how we integrate the supercomputing capacity to address the needs of the mission?
[19:58] Brian Spears: Well, a little bit of both. So the computers and capabilities that we have today are enormous. In fact, at Lawrence Livermore, just this year, in the next month or two, the first exascale computer in the National Nuclear Security Administration will come online here. It will be more than 30,000 GPUs, quite a bit more than 30,000. It turns out that machine is capable of doing high-precision scientific computing for simulation and modeling, and, given its GPU capabilities, is also fantastic for doing machine learning and AI. So we will continue to build out capabilities like that. Part of FAST is aimed at building out the hardware infrastructure that we need for pushing AI to the next level. So we will think about building out several more relatively large, concentrated collections of GPU-driven computers using the FAST initiative that you talked about. We'll leverage the existing computational capability that we have. We can't get by without the high-precision, high-performance computing we're doing today. But we will use that as a data source and as a springboard to then drive AI-specific computing that we can use to do the lower-precision, high-frequency computation that's needed in the AI space. That brings us to some critical issues in our own capabilities and on the national stage, then the global stage. There is a huge demand for those GPUs, and there's a huge demand for the electrical power needed to drive them. This is a place where there's a global trend toward enormous scale. Meta, for example, has bought 600,000 GPU equivalents for this current calendar year. And that puts them in a position to do AI at a scale that no one in the government is doing. Our job in the Department of Energy is to understand what the art of the possible looks like, and so FAST is an effort for us to bring, as a public conscience effort, our understanding of what's there in AI and what we're capable of into line with what's going on in industry as well. So we'll push out our data centers, we'll scale them up, and we'll build our capabilities to understand exactly what's going on in the world. What are the threats and what are the opportunities?
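[Editor's note: As a toy illustration of the precision tradeoff Brian mentions, the hypothetical snippet below, not a lab workflow, compares the memory footprint and rounding error of the same data stored in float64, as traditional simulation codes use, versus float16, the kind of low-precision format AI workloads tolerate.]

```python
# Toy illustration: traditional simulation codes run in float64, while AI training and
# inference typically tolerate float16/bfloat16, trading a little accuracy for much
# higher throughput and a smaller memory footprint.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1_000_000)

x64 = x.astype(np.float64)
x16 = x.astype(np.float16)

print(f"float64 memory: {x64.nbytes / 1e6:.1f} MB")  # 8 bytes per value
print(f"float16 memory: {x16.nbytes / 1e6:.1f} MB")  # 2 bytes per value

# Relative error introduced by the low-precision representation in a simple reduction
s64 = np.sum(x64 ** 2)
s16 = np.sum(x16.astype(np.float64) ** 2)
print(f"relative error in sum of squares: {abs(s64 - s16) / s64:.2e}")
```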
[22:16] Pamela Isom: This may be something that we have to take to a different discussion, but does any of the work that you're doing address the concerns that industry has pertaining to energy efficiency? Is that a conversation we need to take offline, or can you talk about that today?
[22:36] Brian Spears: No, it's a great conversation. We are equally concerned about energy efficiency. We would like our computational systems to become more efficient. We have a strong record from the Exascale Computing Project in DOE of working with vendor partners to make computers increasingly energy efficient. But there's still a challenge. I mentioned the demand that companies like Meta have. That's a demand for infrastructure that is about $10 to $15 billion of compute. That's how much an aircraft carrier costs, and the United States has eleven of those. We have companies who are moving nation-state amounts of capital, with an attendant demand for power, around inside the country to confer economic advantage, which is great. It also means that there's a scale of demand for this that the Department of Energy will have to understand and will have to keep up with. Part of FAST is us building out our own internal capabilities so that we understand what's going on in that space. Part of it is building out robust public-private partnerships so that we can help partner on increasing energy efficiency: understanding ways to make algorithms more efficient so that they reduce the demand for power, ways to make the hardware itself more efficient so that it reduces demand for power, and then ways to think about what are the critical and high-priority problems that we should attack within the envelope of limited resources that we'll operate within.
[24:02] Pamela Isom: Yep. The reason why I asked that is one of the use cases that I was going to propose for FAST, but you're already there, is to look at energy efficiency, the issues that we're having in society. So that's from an ethical perspective, which we always deal with, ethics and governance. That is one of the things that I was really considering, so I'm glad that you are starting to take it into consideration, and that should be one of the best use cases, right? Because no one can tackle this problem like Energy can, like the department can. So that's another use case that I'm glad to see added to that portfolio for FAST. All right, so you have gone over the FAST models and that capability, which I think is instrumental. How does that tie to your AI3 initiative?
[24:55] Brian Spears: So AI3, our AI Innovation Incubator, is a Livermore effort to partner with industry on the way that we move forward. I'm proud to work at a DOE laboratory in the NNSA. We can do anything, but we can't do everything. So we will do it in partnership with our industry partners. There are capabilities that exist in industry today that are moving quickly in the AI space that we will not replicate inside DOE. So we will surf up to the forefront with our external partners. We will share problems. They will learn from the scale at which we work, and we'll learn from the pace and the capabilities that they have. That allows us to bring solutions from challenge problems back into the DOE missions and make the private partner better as well. An example of this includes FAST, where we're thinking about frontier models at enormous scale for transforming science. We've recently met with a collection of tech industry partners across the AI software world and the computational hardware world to build out a picture of what it looks like to do this, not only for the good of DOE or for the good of the private partner, but for the good of the country. The end product of working on something like FAST for transforming science is, I believe, lasting techno-economic advantage for the United States. That's part of the promise that we're bringing with AI for science. And there's an element of peril hidden in there as well, which is that, the way the dynamics of AI are working, the entity or the nation that is slightly ahead is likely to take off quickly and be forever ahead. So the country that gets there first is the country that is going to lead in this for a very long time. From the US perspective, should we get there first, that looks like permanent, or at least long-lasting, advantage for the United States in techno-economic and scientific capability. The alternative, if we don't lean in, is that we find ourselves behind the first position, and that likely competitor is China, and then we find ourselves at permanent disadvantage should we not lean in and do this. So the way that we lean in and do this together is through partnership. It's DOE and FAST. It's our technology industry partners from the US economic ecosystem, all working together against critical and emerging problems in science, energy, and national security. So AI3, as you said, enters because we are strong at bridging with those private partners. We bring them to the table, and then we have really exciting conversations where we share with our commercial partners things they didn't know were possible in science, and they share with us things that we didn't understand they were capable of in the commercial AI space. It's really exciting.
[27:47] Pamela Isom: What's the best way for industry to engage with AI3 and FAST? How do we get involved?
[27:55] Brian Spears: Oh, well, AI3 has an open call through the federal system for partners. Folks can literally just reach out. We're listed, we announce through the Federal Register, and we're out there with an open opportunity call. Anybody who's interested can reach out to us, and if we can find a way that there are synergies, then we can work together. FAST is still growing, but it's going to put out an open call as well for industry partners to say, hey, look, if you think you can help us build out science at a scale with AI that's never been seen before in history, and for US advantage, then come join us.
[28:38] Pamela Isom: Okay, well, that's good to know. So at this point, we've been having a good discussion, and I'm going to share with you an experience that I had recently, just to give you a sense of something that's going on. My husband is a fast driver, and he had this app, not within his vehicle, but on his phone, pertaining to his vehicle. And I'm the one in the family that's more calm when I'm driving. They are studying our driving habits, they're studying our driving patterns. So I called the company and said, I want to disable this capability. I don't like this capability, number one, because it's making predictions. He's a good driver; he's just a fast driver, right? So there I am, on the opposite side of the spectrum, a bit more patient when I drive, and I don't like them having my information. I don't like them in my business, Brian. So we deactivated that service. But I also got suspicious and deactivated it for myself as well, because I got concerned. The idea is that you get premium rates, but the concern is that it's too much information, and predictions we know nothing about. We don't understand how these algorithms are working. We don't understand what they're doing. We don't understand how that information is going to be used. So my request to you, I wanted to share that experience with you, but my request is this: I know that Lawrence Livermore is more transparent, even though we're dealing with national security matters, and my encouragement to you is to keep on doing that, keep on explaining how we're using these models. I appreciate FAST, because I know where that's headed. We've got to be able to help people understand why and how information is being used. And don't use tools like that for the wrong purposes, and don't do the bait and switch, right? That's bait and switch. Okay, you're going to get reduced premiums if you let us track you, but actually, no, because if they don't like your history, or whatever these algorithms decide, your premiums are headed up, right? So I caught it and told my husband, nope, we're going to turn this off, we're going to deactivate this. So, first of all, what do you think about what I said? And second of all, I believe that Lawrence Livermore, no, I'm not biased, I just know Lawrence Livermore is being more transparent with their algorithms and with the models, and I need you to give me an example of how you're doing so. This goes to our ethical discussion. So give me some perspectives there, and then we'll wrap up with your words of wisdom. What do you think about what I said?
[32:02] Brian Spears: Well, for your anecdote, I totally agree. I don't want to be tracked when I drive, either. It feels a little intrusive, and it doesn't feel fair because you don't exactly know what they're reasoning about. The thing that is concerning is the idea that a new model is looking at data where there hasn't been a lot before and is making extrapolations, which may be fair, maybe I am making dangerous driving decisions, and maybe it's not. And it's not clear what my recourse is. How do I take advantage of this? Can I see in real time what's happening? Do I understand how I'm influencing these decisions? It takes people out of control of a situation that they want to have control in. I'll contrast that and say that's exactly the opposite of what we're doing in scientific AI. What we are working very hard to do, as you alluded to earlier, is to build models that give us predictions that we can test in the real world, that we're confident are going to give us results that we understand. And if we don't perfectly understand them, we have the process in place of going and doing simulations and experiments to verify and validate those models and help us understand what's going on. We also have a scientific workforce that we are not trying to take out of the problem. We're not trying to make decisions without understanding what's going on. We're trying for just the opposite. We are putting our scientific workforce back in control of the large amounts of data that we have, so that we can define which experiments to do, we can define which simulations to do, and get more understanding out of these models. While scientific AI may not yet be touching your life in the way that commercial AI is for monitoring driving, the lessons that we're learning from scientific AI are applicable there. And it's one of the things that we think is the strength of things like FAST. It's going to teach us to do things where we have nature as a guide, so that we can build out AI for societal purposes that are better in the future. So everything you say resonates with me 100% in that story. And the final parting thought there is: we know those lessons so thoroughly that the foundation of safety and security that we're building our science AI on top of is in the DNA. It's built from the ground up. As an example, we are relatively transparent. Our default position is to share what we build, along with the data that it's built on top of. We have an open data initiative. You can go to our ai.llnl.gov page, and you can see what scientific datasets for AI training look like. You can find models that go with them. We will stop short, though. There are things we won't share when we think it's going to allow people to do something that's inherently dangerous. So if we think a material or chemical discovery model is going to help someone do something dangerous in the biological space, we're not going to completely share that. Or if we think something that allows you to build out physics prediction capability might be good for developing weapon systems, then you won't see that either. It puts a big burden on us to make those decisions. But our default position, as you say, is openness and transparency, and then we only put the brakes on when we think transparency is at odds with safety.
[35:26] Pamela Isom: Right. I think openness and transparency is the way to go and the way that we have to go, as well as explainability. So I want to thank you for being on the show and joining me. And before I depart here, do you have any final words of wisdom or experiences that you'd like to share with the listeners that may help them in their digital transformation journey?
[35:53] Brian Spears: Oh, wow. Well, there are lots of lessons that we've learned over the last decade. I guess my fundamental lesson is just one about patience and persistence. Folks who are working in the AI space are, by definition, doing something new. And there are plenty of pressures and incentives and people who are going to say, don't do something new, it's scary, it's maybe not beneficial, it's not the way that we've always done things. And I would just encourage persistence. If you have the vision and you can see what the benefit is, and if you're doing it the way that we've described, so that you're avoiding harm and looking toward producing good, then you just have to be persistent. I have developed the capability, working in AI and machine learning, of ramming my head into the same brick wall over and over and over again to get an idea through, and eventually it pays off. So for those who are facing the naysayers and the doubters, just get it done.
[37:00] Pamela Isom: I like that. So you said, be patient, be persistent, be hopeful, and stay hopeful. And then for leadership and our peers: think about incentives, and provide incentives to support the teams that are involved with AI strategy and solutioning, because it is complicated. It's a complicated world, and there are a lot of naysayers. There's something that my husband always says to me. First of all, he always tells me to tell the Lord thank you. I could be so frustrated, and he's like, tell the Lord thank you, right? So there's that. But then there's this other thing that he says to me, which is, set your face like a flint. Like, stay focused. You know what you're going after, and you stay focused and you do that. So, yes, you consider the things that are going on around you, but don't let that get you off course. And what you said is there's so much noise, and with all of the noise, you were saying to make sure we're patient and persistent, the right persistence. And for those who are in our support structure, consider incentives to motivate us to keep going. That's what I know about you, too. That's the brand that I know. So I really want to thank you for the words of wisdom for the listeners, and thank you for joining me on this show today. I appreciate it, and this has been very insightful. There is so much going on with the national lab in the scientific space, and I think today's discussion helped us really understand what scientific AI is all about, right? Using AI for science purposes. So that's a really good discussion to have. So thank you very much.