AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
E011 - AI or Not - Tom Suder and Pamela Isom
Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.
What if technology could revolutionize government, industry, and academia all at once? Join us for a riveting conversation with Tom Suder, the visionary founder and president of ATARC and CEO of Alethia Labs. Tom, who likes to think of himself as a technology Sherpa, takes us through his incredible journey: from his early days inspired by an entrepreneurial uncle, to pioneering efforts in telecommunications at the dawn of the internet, and his innovative work in mobility and cloud computing. Discover how he established ATARC to bridge critical sectors, addressing complex tech issues like AI, cybersecurity, and quantum computing. Tom also shares the strategic evolution of ATARC, including its impactful partnership with GovExec.
Ever wondered how AI can impact everything from military operations to everyday tasks? This episode dives into the transformative power of AI, showcasing projects that range from computer vision for safety enhancements to the military's ambitious CJADC2 data-synthesis effort. We discuss how AI is improving flight scheduling, reducing bureaucratic paperwork, and making significant advancements in computer vision technology. The conversation underscores the practical benefits of AI, emphasizing both large-scale and smaller, yet impactful, applications that demonstrate AI's profound potential in solving real-world problems.
The future of AI in a free society is a crucial topic, and Tom provides a compelling contrast between responsible AI development and surveillance-heavy approaches like China's. We explore the importance of international collaboration to establish ethical AI practices and the efforts of the Department of Defense's Responsible AI Community of Interest. Finally, we discuss the transformative potential of technology for long-term productivity and prosperity, stressing the need for a balance between rapid technological advancements and human elements. Join us as we envision a sustainable future driven by innovative yet responsible tech practices.
This podcast is for informational purposes only. Personal views and opinions expressed by our podcast guests are their own and are not legal, health, tax, or other professional advice, nor official statements by their organizations. Guest views may not be those of the host.
Pamela Isom:Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence and digital transformation journey. My name is Pamela Isom and I am your podcast host. We have another one of those special guests with us today: that's Tom Suder. Tom is founder and president of ATARC. He's founder and CEO of Alethia Labs. He refers to himself as the technology Sherpa, which I agree with, and Tom and I go back. We worked together, we've had some fun together. I met him when I was in government and we've just stayed connected ever since. Tom, welcome to AI or Not.
Tom Suder:I'm so glad to be here, Pam. It's always great to work with you.
Pamela Isom:We do go back a ways, yep. So tell me more about yourself, tell me more about your career journey, how you got to where you are today. It sounds like a lot, but it's not. Just tell me about that career trajectory, which leads into your entrepreneurship, and tell me why you're not bored.
Tom Suder:Yeah, I'm anything but bored. Well, I guess from the beginning, I had an uncle who always owned a lot of different types of businesses, and I followed his career and really got this love of entrepreneurship. So I went to Virginia Tech. I was really involved with sports and I was planning on being in the military, but that didn't work out due to some health issues, so I went back to what I thought I would be good at, which was business. I ended up working for a company doing inside wiring, and that grew. I ended up spinning that company out with one of the partners of that firm, and we started doing nationwide installations for telecommunications.
Tom Suder:So this is, just think, the dawn of the internet. We started when the internet came out, and there was massive demand for communication gear in the military and civilian agencies.
Tom Suder:So we built a business, and it was really about a process of deploying teams of technicians, first in the United States and then around the globe. I did that for about 17 years, and I have to admit I got a little bit bored. You have the challenge where you're almost, I hate to say it, hoping for some kind of event, like a hurricane, where you can help out and do some really good work. We did 9/11 work and some other overseas work, and I just maxed out on that. I wanted to do something else, so I got bought out of that company, and then I got into some different areas, including mobility: how can I do deployments out in the field? Now, this is the beginning of the iPhones and the iPads.
Tom Suder:And then I realized it's really hard to have a new technology. I've got to worry about security of the device. I have to worry about security of the application. I have to change the workflow. For the government, it's really hard. I had to go from a paper-based process to deploying tablets. What happens to people at the USDA who are doing survey work on paper? And then I realized this company, myself, I can't do all that. So I started working with another association and gradually developed my own association, just because I don't like friction and bureaucracy, and you have to work all these issues in tandem. I've seen it so many times. We've seen it with cloud computing, and we've seen it iterate now with artificial intelligence and cybersecurity. It takes a village to pull a capability together.
Pamela Isom:Yeah. So I like the infrastructure discussion and how you're bringing that up, because that's not always cool to people but it's so necessary. And the fact that you were involved with it early on, from the early internet days, the telco days, the mobile computing days and involved in that whole transition, I think is wonderful and it gives you a good foundation for the organization that you're running today. So tell me more about these organizations. You've got two going on. Tell me more.
Tom Suder:Yeah, so about 10 years ago we started the Advanced Technology Academic Research Center, and that's when we started our association. It brought government, industry, and academia together to solve really hard problems, to work on these problems and really drill down. And, as I said before, we've taken on every emerging technology issue that's come up: it started with mobility, then went to cloud, then security, 5G, AI, and now quantum. So we gradually expanded, and it was a vendor-funded 501(c)(3). We get our funding from the vendors in each of these areas, and we really grew and expanded. It was very exciting.
Tom Suder:But I got approached by the Department of Defense and some other agencies: hey, we really like what you're doing with ATARC as far as organizing working groups, and we like that thought leadership. How could we work with you directly? And it started to get to be a little bit of a conflict. If we're working with DoD and also with vendors or members, it could be a conflict, especially in the area of procurement.
Tom Suder:So we ended up breaking the company into two parts, and ATARC, the name and a lot of the employees doing the events we were commonly known for, went over to the GovExec group. That transaction was about a year ago, June 1, and it's been going great. I'm still the CEO, last I heard; they'll keep me. It's been going really well. What GovExec does is lots of bigger types of events. They do the Fed 100 and some other big major awards, and they just have scale. They're a media organization with probably almost 400 employees, and they've bought 15 companies. It was a good place for it.
Tom Suder:The remaining entity has been renamed Alethia Labs. Alethia Labs is a lot of ex-govies doing direct work with the DoD, and aletheia means truth. I actually got the idea for the name about five years ago, and it's about AI: I was thinking the explainability of these algorithms can be very, very important, and I had thought of starting a company in that area. I just needed some technology from MIT that might possibly be brought into the commercial world, but that never happened. I liked the name, and I think it's a good, solid name, especially when we're doing work around AI, so the name stuck. We started that about a year ago.
Pamela Isom:And Aletheia means truth, you said.
Tom Suder:In Greek, yes.
Pamela Isom:Wow.
Pamela Isom:And I know that you are teaching. I know when I was working with your teams, we were involved with AI literacy. We were trying to get to risk mitigation. We were trying to help folks understand how to mitigate bias, mitigate risk, as well as see some of the good things and some of the positive sides and aspects of AI, particularly for the Department of Defense. Let's talk about that some more.
Tom Suder:Yeah.
Pamela Isom:What's happening with AI literacy? Where do you see it going?
Tom Suder:I think it's only going to get bigger and bigger. As you know, you're involved a lot with STEM events, and I really think it has to start in middle school, just like what we've been talking about with cybersecurity, so we've been somewhat involved with that. I like what the University of Florida is doing. They're basically making a mandatory elective where you have to learn about artificial intelligence, and that really does translate, and they have a supercomputer there called HiPerGator. They've basically decided this is very important to the workforce of the future, and the DoD is taking a lot of steps, as well as DHS, the Department of Homeland Security, and some other agencies.
Tom Suder:How do I educate the workforce? And there are different aspects of the workforce. You need to have a general idea about AI, some general knowledge; you don't want to be completely uninformed. But then the training that we've done has been super interesting. It's around acquisition officers, and I'll tell you what, acquisition officers have the hardest job in the world. They really have to understand the technology to a degree, and new technology is always reaching them: how do you procure it? It's very difficult, and that has been a treat. We've gone around to many different units in the Army, the Navy, and the Air Force, as well as Space Force. That's been interesting. I do think it needs to scale to a degree. We were working with Defense Acquisition University and ATARC on zero trust security.
Tom Suder:I'll tell you what. It was very interesting. We had 3,500 people on a two-day webinar and most of them were acquisition officers. So I think we need to build a community. I know we talk a lot about AI. Every event that we do around security, we talk about AI for security. Large language models for threat intelligence.
Tom Suder:And every AI webinar event has that security component, which I think is a really good thing. AI and security have gone hand in hand.
Pamela Isom:Absolutely. I agree with you 100%. It's very rare that I talk about AI without thinking about how to secure the AI solutions. Remember when we first started talking in industry about shift left, back in the day when everything was about shift left? Now shift left has really come to reality, and everybody calls it "by design." But when you think about emerging tech, and when we think about AI and any other technologies, you have to think about securing the solutions up front. It is too late to wait until a product is deployed to think about securing those solutions. So thank goodness for the folks who raised the flag around shift left; we caught it and we have integrated it into our programs. And for me, my business model is the same. I don't discuss AI without discussing securing the solutions and protecting one's privacy. So I agree with you on that.
Pamela Isom:I still want to go back to literacy a little bit more. We talked about where we see it evolving. I want to talk some more about bets. So let's talk about little bets and big bets, because we were talking about that a little bit before we started the podcast. But let's dig some more into where we see the big bets. Let's start with that one first. You started to touch on it. I will lead in with an example.
Pamela Isom:This week I was looking into some AI solutions that I thought would be really valuable in energy. When I was there, we had always talked about using computer vision to understand the health of the transmission lines, and now they're really starting to do that in California. There is some work underway, and they're bringing that more and more to fruition. I think that's a really good big bet that's going to improve safety for human beings, because we've had so many issues around our energy infrastructure, those substations, and some not-so-good behaviors by others: shooting at people and shooting at the infrastructure, all kinds of things. And some of those infrastructure lines are not in the safest areas. Why would I want a person climbing up a pole to inspect the health of a transmission line when we can use computer vision to look at it, and some of the generative AI features to make predictions? So I saw that going on. To me that's a big bet that is starting to become real, and I was excited about it. But let's talk some more about big bets.
Pamela Isom:What you got.
Tom Suder:Yeah, I always think of big AI bets as these big grand-slam things. One I saw, and I don't even know if it's a big bet: Captain Brian Erickson over at the Coast Guard. He's about ready to retire, and he's going to be on the presidential transition, kind of working the IT issues. Great person, great team. He spoke at one of my events about using computer vision to pick out a boat or a person floating in open water with high seas. Just think of The Perfect Storm: you're missing this boat, the Andrea Gail, it's out in the middle of nowhere, and you've got to peek out of your helicopter in a driving rainstorm, and maybe you pick up something, maybe you don't.
Tom Suder:So they're really looking at doing computer vision, having it on the helicopter and trying to pick out that little object that's just a little bit different. To me, that saves lives. There's literally no downside; it's the best-case scenario, and I think it will save lives when they really get it implemented. That to me is a big bet, and once you train the algorithm it's something you can actually deploy. It's not science fiction; it can actually be done. And I loved your example. I think the biggest thing the military is working on now is CJADC2, Combined Joint All-Domain Command and Control. It's basically bringing in all the data. It's a hard enough concept to bring all the data you can to that warfighter right at the front, but there's also so much data.
Tom Suder:You need to make sense of it. So what do you need? You need to get access to it really quickly, and you need to synthesize that data. The idea of using large language models over that, that's a big bet. It means you're doing your work better than you've ever done it before; you have access to all the data you can get into. And I really like what Frank Indiviglio says. I don't know if you know Frank, but he's the CTO of NOAA, and he really talks about large language models as having a conversation with your data. I really like what he puts out.
Pamela Isom:I haven't had a conversation with him yet, but I probably will. When I think about the large language models, and I think about the data, if I go back to AI literacy, one of the things that I really want to see more of, and I'm probably going to do this with my training programs too, is the whole concept of tokenization in those large language models. We need to dig into that deeper. We need to understand what makes up a token, what makes up a sentence, what makes words understandable and translatable. I don't think there's enough understanding around that, and it has everything to do with generative AI and how we are going to make it more reliable. So I wanted to point that out, because a key to a large language model is how we make sure that we understand and transform the data properly. That's something I want to see more about. It's a little more technical, I guess, but something I think we should be probing into.
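For readers curious about the tokenization concept Pamela raises here, a toy sketch can make it concrete. This is a simplified greedy subword tokenizer with a made-up vocabulary, not any production tokenizer's actual algorithm; real LLM tokenizers (byte-pair encoding and similar) learn their subword vocabularies from large corpora, so the splits below are illustrative only.

```python
# Toy illustration of subword tokenization, the idea behind LLM tokenizers.
# The vocabulary here is invented for the example; real tokenizers learn
# theirs from data and typically operate on bytes, not lowercase words.

def tokenize(text, vocab):
    """Greedy longest-match subword split over a fixed vocabulary."""
    tokens = []
    for word in text.lower().split():
        i = 0
        while i < len(word):
            # Take the longest vocabulary entry matching at position i,
            # falling back to a single character if nothing matches.
            for j in range(len(word), i, -1):
                piece = word[i:j]
                if piece in vocab or j == i + 1:
                    tokens.append(piece)
                    i = j
                    break
    return tokens

vocab = {"token", "ization", "under", "stand", "able"}
print(tokenize("tokenization understandable", vocab))
# → ['token', 'ization', 'under', 'stand', 'able']
```

The point of the exercise is the one made in the conversation: a model never sees "words," only these learned pieces, which is why understanding what a token is matters for reliability.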
Pamela Isom:I want to go to little bets. So what do you see as some little bets?
Tom Suder:This was a couple of years ago. I went to the Naval IT Summit and I was in a booth. I don't normally do booth duty, but it's kind of fun to be out with the folks in the booth and on the floor. This was over at the Gaylord in Maryland, and I'm ready for AI. I am so psyched about this; I think everybody's going to ask me about AI. And really, all they came to us with was: how can I do my flight schedules? Right now I'm on a laptop, I can't get access to the system I need, and I'm trying to schedule training for my carrier group. There are lots of little things like that.
Tom Suder:My son is actually stationed in Germany; he's a lieutenant in the Army. Just the amount of paperwork, the paper drill you have to do. You don't really have as much time to do your real job, the reason why you're in the Army. He's a signal officer; he's supposed to be doing that kind of work. It's a lot of paperwork, it's a lot of bureaucracy: where is it in this process? And some of this isn't AI. Some of this is probably just having a business process management system with a little bit of AI built on it.
Tom Suder:But I think in the military you really ought to break down a person's job, where they're spending the time, and then pilot that and really try to get productivity. I don't know about you, but I've played with ChatGPT for both my work and my personal life. It's a real time saver. You don't have to think of everything from scratch. It's not the answer all the time, but it's a helpful aid, and then I can put my original thoughts on top of it, but it really helps me. I've got a Flaubert quote: be regular and orderly in your life, so that you may be violent and original in your work. So I don't want to spend time on bureaucracy and paperwork. I want to spend time on the next big idea, where I'm trying to do an ATARC or an Alethia Labs.
Pamela Isom:I like that. I agree with you. I like the tools, the generative AI tools. AI was popular before generative AI reared its head, but what makes generative AI so popular, as I always tell my clients, is that you are providing the inputs. You are in control, to some extent, of what you will get back. It's still a black box, but you're more in control, not like traditional machine learning, which is behind the scenes anyway. You're feeling more empowered; that's the word I want to use. So we like being able to come up with creative prompts or tell it to create imagery. We like that because we feel empowered, and so they did a good thing there. That's very alluring, because people like to feel empowered.
Pamela Isom:I use generative AI and AI tools to help me with administrative tasks. I don't have time to do a whole bunch of research, so I rely on AI to jumpstart research for me, but I'm always in there when I get the information back. So Greg and I were talking, Greg Sisson, my colleague from Energy, and we were discussing the other day how we could consider using AI to generate policies, the baseline policies. We were discussing security, and we thought we might want to consider using it to create at least some baseline policies and then use that to evolve them. Now, I like that idea. Some may not, but I like it because I know we have to have that second set of eyes that's going to go in there and cultivate the policies. Do you remember how you can get writer's block? This helps with all of that, because you get writer's block when it comes to policy creation.
Tom Suder:It's brutal work. What if you just ingested all the policies in the department over time? Now you're training on that. That's pretty powerful, because it can come out in Department of Energy speak. Yeah, I think this is exactly what you should be doing. I was actually just thinking of what it was like before and what it is now.
Tom Suder:And I went to GSA, and I thought this was the best idea I'd ever had: what if you had a FAR app, a mobile app, so you could ask a question? Of course, IBM had just come off of Jeopardy!, where they beat Ken Jennings, and I was all hyped about it then. But I came into a meeting with IBM and they had 10 people there. They needed 14 PhDs, I'm exaggerating, to design the system for the FAR. Now all I have to do is throw the FAR in there. It'll understand it to a degree, and hopefully I could train it enough that I could say: hey, I'm taking a government person out for lunch, what am I allowed to spend on them? It would be so much easier now. Because of this infrastructure and these large language models, the barrier to entry is rather low.
Pamela Isom:Yep, and that's a good use case. The reason it's a good use case is that you have a document that's already blessed, which means you have data that's already approved and authorized. So now your generative AI is responding based on that good data. Go back to your point earlier about how the data must be good.
Pamela Isom:So that's the example of the FAR, which has tons of data. It really does; it's a big document. And many of these regulations that have been authorized and approved, executive orders, all of those: if you pull those together, you have an LLM of information that has been blessed, authorized information. So that's an example of us thinking on the fly of a government LLM with authorized and approved data sources. That's how you do it.
Pamela Isom:So, as these organizations are thinking about their data, going back to your point about data, you have to think about your sources of the data and the reliability of the data, and don't overthink it. Organizations sometimes tend to ask customers for too much information, so they have a great deal of information in their environment, and then the models try to use it all. So you have to think about data minimization, categorization, and classification, and go back to some of the simple things: look at the FAR. It's a reliable, trusted data source. Use that as an example as you come up with a model for how you want to source your LLMs. So that was a good point.
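The "blessed data source" idea Pamela describes is essentially retrieval grounding: restrict the model to passages drawn from an authorized document. A minimal sketch of the retrieval step, with hypothetical FAR-style snippets and a toy word-overlap scorer standing in for a real vector index (a production system would then have an LLM answer only from the retrieved text):

```python
# Minimal sketch of grounding on an approved source. The passages below are
# hypothetical stand-ins, not actual FAR text, and word overlap is a toy
# stand-in for semantic search over an indexed, authorized document.

def retrieve(query, passages, k=1):
    """Rank passages by shared words with the query; return the top k."""
    q = set(query.lower().split())
    ranked = sorted(passages,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

far_passages = [  # hypothetical snippets from an authorized document
    "Gifts to government employees are limited to 20 dollars per occasion.",
    "Contract files must be retained for six years after final payment.",
]
question = "I'm taking a government person to lunch, what am I allowed to spend?"
best = retrieve(question, far_passages)
print(best[0])
```

The design point mirrors the conversation: because every candidate passage comes from an already-approved corpus, whatever the model says can be traced back to blessed text, which is what lowers the hallucination risk discussed next.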
Tom Suder:I totally agree with that, and I think that's where the government gets a little confused. When you're training on public data that's out in the wild, you have a much better chance of hallucination. It's still okay in certain instances, but if you train on your own data, that's a lot better. That's where the corporations are headed: they are taking their data feeds and putting LLMs on them.
Pamela Isom:And it's not 100%.
Tom Suder:There can still be some hallucinations, but from what I understand, your risk is a lot lower. When you're reaching out to the public internet, a lot more can happen in the wild. So I think they could use all their data. What if they had all their data tied together, and you could search over all of it and draw conclusions?
Pamela Isom:That's really the holy grail. It's the holy grail, but they still have to look at it, they still have to examine it, they still have to make sure it's reliable, because humans are still in there. But at least the foundational principles behind it are there, because it's rules that the government follows. So I agree with that. We glossed over it, but we're making some key points here, because that's how it's done, and again, organizations don't need to overthink it. We need to look at it from the simplest perspective, which is what we were talking about with the big and the little bets, and the FAR example we came up with is somewhere in between. That's a powerful solution, so I know they're doing it, but we're reiterating that point. Okay. So we talked about how AI is assisting you, and you gave some really good examples.
Pamela Isom:I have something I don't know if you've heard about, but GPT-4o is out. I don't know anything about it yet other than the fact that they're trying to democratize AI. It seems to be more on the surveillance and invasive side of the house, so I want to keep a close eye on it. I have not tried to use it yet. I don't even know if it's available, but I know it's coming soon. Have you been curious about it? What's your take on it?
Tom Suder:I know there's a danger in using the technology in a way like that. I think we want a free and open society. We don't want to be China. The biggest advantage they have over us in AI right now is that they're really good at spying on their people. We don't want to be that. We want to be free and open, and we don't want our data cultivated like that. There will probably be some legislation at some point, but we really need to understand it and what the boundaries are, and I think a lot of that is that we need to work internationally, with NATO countries and the Far East. There's the AUKUS alliance we have; I think I saw today that South Korea and Japan are going to be part of that as well. I think we need to really work with our partners on what's a really good use of AI and what's fair use of AI. We just don't want our data cultivated, and nobody wants that. We're in the United States.
Tom Suder:In China, they want to do that. They want to look at their people so they can't get out of line. It's 1984 over there; it really is an advanced 1984 type of system. We don't want that. I know there's a lot of press around that. That's what they're worried about: the government taking this data, saying now I understand the person, I can leverage them. I keep an eye on it. It's the civil liberties part of me.
Pamela Isom:Yeah, and the ethical part as well; that's starting to really shine through. All right, I think I'm pretty much set with what we've been talking about today. Before I ask you for your words of wisdom, is there anything else you wanted to discuss?
Tom Suder:Yeah, I think responsible AI. We are running a group for the Department of Defense. They've really wanted to be at the forefront of this since they formed the Chief Digital and Artificial Intelligence Office and, before that, the JAIC. I think it's very important. A lot of agencies are right there, and I think it's almost like what you said earlier with cyber, which I think is right: you want to roll that responsible AI piece in from the beginning. You don't want to bolt it on at the end.
Tom Suder:So I think that's important, and we're running that group. It's getting all comers from academia, DoD, and civilian agencies. That's going really well.
Pamela Isom:What's the name of the group?
Tom Suder:It's the DoD Responsible AI Community of Interest.
Pamela Isom:Okay.
Tom Suder:You can Google it and sign up. It's right there.
Pamela Isom:Okay, all right, so that's good; that's not a problem. I think that's a good thing, and I think there are always opportunities for us to learn how to use AI more responsibly and to exchange opinions and facts so that we are learning from one another. So no issues there. And at this point I'm going to ask you: do you have any words of wisdom or experiences that you'd like to share with the listeners?
Tom Suder:Yeah, I tend not to come up with original material myself, but I put up quotes that I think are pretty important. Roy Amara was an American researcher and futurist, president of the Institute for the Future, and he has a law that I think is really important, especially when it comes to AI. His quote is: we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
Tom Suder:So I think we're going to be in the hype cycle of AI for a while, and pretty soon quantum, but I think it's going to have a lot of uses in the future, and it's going to get more and more pervasive, and I think it'll really change the way we do business in the world. So I'm optimistic too. I don't think the Terminator is coming anytime soon. I think it'll really help alleviate some of the problems that we have as our society ages. It'd be nice to have a robot that can come in and help you out when I'm 90. Hopefully that's not anytime soon, but I think there are a lot of advantages. Technology has helped the world so much. We thought we were going to starve in the '70s.
Tom Suder:It turns out crop yields went up eight times in 70 years. So I think technology can really help us. It'll be a little overhyped in the beginning, the Gartner hype cycle, but as you fundamentally get some basic capabilities, it'll help on productivity and make us a more prosperous world over the long haul. I'm excited.
Pamela Isom:I think that's really good. So you said we tend to overestimate the effect of technology in the short run and underestimate it in the long run. I tend to agree with that, because we don't think about the long run. We're thinking about the short run and about what's happening immediately, right now; that's human nature. And so for the long run, that's where you and I come along, to help organizations and people start to think about the longer-term implications, so that the decisions they make are sustainable. I think that's really good.
Pamela Isom:We're in the day and time of digital disruption. Digital transformation started a while ago, but it isn't going to stop. At this point, it's a matter of how we address and deal with all the change that's coming, not just with technology but with the human element as well, and so thinking about things like what you just mentioned is great wisdom for folks as they start to lay out their plans.
Pamela Isom:And I mean, you don't have to plan everything, but think about the short and the long term, and think about sustainability. I really appreciate talking to you today. It's been a minute, but it's always good to see you. You are a great friend, and I really enjoyed working with you, so let's stay in touch, and thank you so much for being a guest on the show.
Tom Suder:Thanks for having me. I feel honored. It's great.