
From left to right: Lane Dilg ’99, Thompson Paine ’05, and Marina Chase Carreker ’03 (photo by Leon Godwin)
This episode is a recording from the 2025 Alumni Forum of a panel entitled “Making Sense of AI, and the Revolution Reshaping How We Think, Work, Learn, and Relate.”
The panel was moderated by Marina Chase Carreker ’03, founder of Galleon Strategies. Joining her were Lane Dilg ’99, former head of infrastructure policy and partnerships at OpenAI, and Thompson Paine ’05, head of product strategy and operations at Anthropic.

Music credits
The episode’s intro song is by scholar Scott Hallyburton ’22, guitarist of the band South of the Soul.
How to listen
On your mobile device, you can listen and subscribe to Catalyze on Apple Podcasts or Spotify. For any other podcast app, you can find the show using our RSS feed. You can let us know what you thought of the episode by finding us on social media @moreheadcain or you can email us at communications@moreheadcain.org.
Episode Transcription
(Marina)
All right. Thank you so much. Welcome to the AI Panel, the topic of topics for 2025. It’s coming up everywhere, and now we have forty-five minutes to really get into it. We’re here to discuss what many people agree is likely to be the most profound technology shift in the history of humankind. I think we’re all grappling with how it is going to affect us, our kids, our careers, our businesses, our relationships, the educational institutions we love. So we’re going to talk about all those big, meaty topics. We do not have much time to do it, so I’m going to do very quick introductions, ask a few questions, and then turn it over to you all. The one ground rule is to please keep your questions brief. This is not the time to tell us how ChatGPT planned your trip to Italy. We want to take full advantage of the expertise. Let me send the panel up here. First, I’m going to introduce Lane Dilg. Lane advises leaders and investors across the full stack of AI, from infrastructure and energy systems to AI-native applications in health, law, and defense, as well as AI adoption across public and private sectors. Previously, Lane was an executive at OpenAI, where she served as the head of strategic initiatives and the head of infrastructure policy and partnerships. She helped guide OpenAI’s growth from a 440-person team to the world’s best-known AI firm. Before joining OpenAI, Lane held senior roles counseling White House officials, US senators, federal judges, and C-suite executives. In local government, she served as both a city attorney and a city manager, including leading a team of fourteen hundred during the 2020 pandemic. In addition to UNC, Lane is a graduate of Harvard Divinity School and Yale Law School. Next up, we have Thompson Paine. Thompson leads product strategy and operations for Anthropic, a leading AI lab with safety research and deployment at scale as its mission. Before that, he led and scaled multiple product and business functions at Stripe and Quizlet. Thompson is a member of the adjunct faculty at Vanderbilt Law and previously served in the political section of the US Embassy in Beijing, after majoring in political science and Asian studies here at UNC. He wanted me to mention that he won the coveted intramural T-shirt for bowling because he was worried he didn’t have anything impressive on his résumé. At least he’s good at something. He got a JD and an MBA at Stanford and studied at the Peking University Law School in Beijing. My name is Marina Chase Carreker. I run a boutique advisory firm helping business leaders navigate AI adoption. Before that, I was president of a venture-backed AI company, where I saw firsthand what a difficult time businesses are having adopting this technology responsibly. Not to be outdone by the other two lawyers on this panel, I also used to practice law. I spent the first twelve years of my career representing technology companies in trade secret litigation before moving in-house at a software company. Here at UNC, I was a history major, so you can see that my career path to AI was as straight as an arrow. In addition to my current advisory work, I also serve as an affiliate faculty member here at the business school, and I was very honored to be appointed last month by Governor Josh Stein to North Carolina’s AI Leadership Council. That is your panel. Like I said, I’m going to ask a couple of questions, and then we will turn it over to you all.
The first question is, there is a tendency to be either wildly optimistic or deeply anxious about AI. And I think for those of us who are working in this space, you often feel both at once every single day. So as people who have spent considerable time thinking about this, where does each of you honestly sit on that spectrum right now, and what drives that perspective?
(Lane)
It is so nice to be here today in the Carolina Union. I just want to express how great it is to see all of your faces and to be in Chapel Hill. When I look at AI and the time that I’ve spent in this space over the last couple of years, I am wildly optimistic. That is why I went to OpenAI. Having been in government, having worked at the Department of Energy, including on climate, I believe we need a step change, and I believe this technology has the potential to transform scientific discovery, health, even engineering, in ways that not only will be very beneficial but also, at this point, are necessary. I think we need this technology, and I think we can use it to solve some of our largest challenges. That said, I also recognize the risks. What I would encourage people to do is really think them through. There is a lot of talk about anxiety around AI, and anxiety can be paralyzing; particularly for the leaders in this room, I would encourage you not to be paralyzed. There are things that we should actually be fearful of. AI that is not aligned, which is not actually top of my concerns, would be one of them, which is why we have some of the smartest people in the world working every day on the technical challenges: Is that actually a possibility? How would it present itself? And what are the mitigations and moderations we need to ensure that we do not end up in a situation where we have AI that is unaligned with human intentions? More specifically, as somebody coming from government and a former federal prosecutor, do I fear what people can do with AI? Absolutely. Should we all be attentive to the things we need to do? Open-source models: how are we actually looking at those? Where are they available? How are we looking at our criminal justice system? How do we keep up with understanding how people are using these tools and what actual materials in the world—biology, chemistry, et cetera—they have access to in conjunction with the tools? Those are things we need to be focused on every day. As all of us enter the AI space, I would say: absolutely, wildly optimistic. This is something we should all be working to move forward, and many of us should also be working on those risks all the time, with equal intensity and equal thoughtfulness.
(Thompson)
I would second a lot of what Lane said there, many of those points. Before I jump into what I’m optimistic and excited about and what I have some concerns about as we look to the future, I’ll say two things. One, when people talk about the future with regard to AI and make predictions, it’s very uncertain. Most of us don’t know. Probably all of us don’t know what is in store. And so I always say that anything I say about what’s coming, assume it is with very high humility and very low conviction. I should probably use that as a preface for anything I say today. The second thing that is key to mention in any discussion of AI, and it will probably color a number of our answers up here, is the concept of scaling laws. Scaling laws is a concept that was first published, I believe, in a paper in January 2020 out of OpenAI, but a number of the authors ended up being founders of Anthropic, where I work. It was this idea that even without targeted innovations in how AI is made, if you just throw more compute and more data at these models, so basically, if you just have a lot of money to buy servers and buy data, the large language models, the AI, get more intelligent. And so this is what triggered the arms race between the big labs like Google, eventually Anthropic, which was founded in 2021, OpenAI, and Meta, where they just start flooding capital into the space, because with more compute and more data, they can just make increasingly intelligent AI. And ChatGPT was probably the next starting gun, which just fueled this arms race that you saw to get more capital for more compute, for more data, for ever more intelligent AI. And we haven’t seen a ceiling on these scaling laws yet. So why do I say that? To preface what I’m excited about and what I’m anxious about. And I really like Lane’s point about this being less of an emotional anxiety; I think we often try to approach these risks empirically, to really understand what they are. But what am I excited about? One, with this explosion of intelligence, I’m very excited about scientific discovery across the board—healthcare, health sciences, space travel, you name it. I think AI is going to be very exciting in scientific discovery. Two, the public sector. I’m actually very excited about the public sector, just civilian technology improving with AI and civilian public services being much, much easier for the average citizen to access. And then three, I would say, things like education. I think education is a huge area of risk and opportunity. On the risk side, there are three things as well. The first is job dislocation. When job dislocation comes up, we often talk about the future as a binary, black-or-white thing: it will all be great, or it will all be terrible. Wherever it lands, there’s going to be a transition period while this shakes out, and I think there’s a question of who is impacted in that transition period, how many people, how badly, and for how long. If you think about the China shock that hit factory towns in places like North Carolina in the early 2000s, we think that permanently dislocated about five to seven million jobs, mostly blue-collar workers in small towns. Now, imagine if there’s an AI shock that impacts tens of millions of people in middle- and upper-class communities, suburban communities, urban communities across the country. You’d see some pretty significant political, societal, and economic ramifications of that. The second is geopolitical.
I think we already see an increasingly fragile international order. And history tells us when there are large technological revolutions, and some countries adopt those technologies faster than others, that leads to new balances of power and upends international order. I think that’s an interesting area to watch. And then lastly, back to the concept of scaling laws, it’s just speed. This is going to happen really fast. And so that compounds any impact we see, positive or negative, in the coming years.
(Marina)
Thank you. Many of us in this room are now tasked with raising the first generation of AI-native kids, and I know both of you are parents. So I’m curious, as close as you are to this technology, how you are thinking about this in your own homes. What will be the practices and the rules? How are you helping your kids learn to interact with this, based on the perspectives that you have professionally?
(Lane)
Yeah, I think obviously this question is so close to so many of our hearts. I have an eleven-year-old. He does not have ChatGPT or any other AI app on his iPad. His iPad is not allowed upstairs. And I really approach it the same way I’m approaching all tech products, which is to try to understand the product, try to understand my child, and then make rules together with my community. So I’m in very close touch with his friends’ parents to make sure that we are, in certain ways, presenting a united front and really thinking through each product, whether it’s Minecraft, Roblox, Instagram, anything else—what are the risks? How are we really addressing those? But kids also need clear structure. So our clear rule is your iPad does not go upstairs, which means we have eyes at all times on what’s happening. I do let him use ChatGPT and now Claude and other apps, but with me. So I often will ask questions and enable him to ask questions so that he is understanding how this works, and he is learning about it along the way. I certainly encourage his school to do the same. I would be concerned in this moment about anyone who is hiding their head in the sand on the education front, because I do think that using the tools more makes you a better user of them and helps you to understand the trajectory of the technology. I have found that to be true even with my son’s questions, which started with, “Can I use ChatGPT to write vows because we’re going to marry two of the neighborhood dogs?” And the answer in 2024 was, “That’s a pretty good use case, sure.” But he himself has figured out that it can help him with other things as well. And I think that learning progression is important but very well supervised.
(Thompson)
I’ve done a couple of panels with educators on AI, and I build it up, all these opportunities, all these risks, and there’s a slide that says, “So what’s next?” And the next slide says, “I have no idea.” It usually gets a mix of sighs and laughter. But I think my kids are—I have three girls under five, so I haven’t had to cross many of these thresholds yet. But when talking to educators, my general thought or advice is that you need to go on offense versus defense. And I see a lot of defense being played right now. “Oh, my God, this is scary. Oh, my God, there’s so much cheating,” which I think is a real problem. “How do I defend against this thing that’s coming at us?” On the offensive side, I think of that in two respects. One, I have a friend who’s a professor at Georgetown, and he makes his students write a paper, and then they book twenty minutes to come defend or discuss that paper with him in his office. And so he gets a chance to see, did they really write the paper? Did they really grok the subject and show what they can do? And again, I don’t think that’s just a defensive play of, “Are they cheating?” He’s really putting them on the spot to show they’ve learned the material, which I love. The other way I think about offense in education is: what are the timeless skills? AI is a tool, and we’re all going to be using it. If you don’t use it, and you don’t teach your kids how to use it, and you don’t teach your students how to use it, they’re going to be left behind. So how do you go on offense to have assignments that require them to use AI, have activities for your children that require them to use AI, but also stay focused on the timeless skills of communication, leadership, and problem-solving that will help them figure out how to use these tools to do great things, regardless of where the economy takes them?
(Marina)
Okay, I’m going to ask one more, and then we’ll get to the audience. We are about three years into this gigantic generative AI experiment. If we had been having this conversation a year ago, how do you think it would be different than the conversation we’re having today? In other words, what’s happened? What have we learned in the last year that has shifted your perspective of what the trajectory of all this is?
(Thompson)
I have to start. There’s probably a lot. I mean, this space is just moving so quickly. I’ll throw two things out. The first is a term many of you have probably heard if you follow the space: agents, or agentic AI, which people have been talking about for several years. It’s this idea that you’re not just prompting a bot to give you a response; the bot is going off and doing work for you. So AI is now doing hours of work for people on its own and then coming back to show the work it has done. And that has really exploded with coding specifically, but it is now reaching outside of software engineering as well. I’d say that really blew up—I mean, early this year, Q1, Q2 this year—and it’s just dramatically accelerating. The second—I don’t really have a phrase for this, but I often describe AI and large language models as an engine that is being built, and it’s a new engine with new capabilities. And what product and commercial teams do is build a vehicle around this engine. So you’re not just taking the new engine, this new magic, and shoving it into your old Volvo. You’re building entirely new vehicles, flying planes, whatever, out of these new engines, because they have these new capabilities. And what I think you’re going to start seeing is the engine merging with the vehicle and building its own vehicles. So you can think of it this way: when the AI was less capable, we would build a box around it that structured how you were going to interface with it. Increasingly, that AI is just going to come out of the box and build its own box and do things that you didn’t—you gave it a problem to solve, and it found ways to solve it that you couldn’t have structured yourself. And so that merging of the engine with the vehicle, if you will, is something we’re starting to see signs of, and I think it’s going to progress quite quickly.
(Lane)
I think that all makes great sense. The things I would add: even one or two years ago, when I was in conversation with policymakers, with leaders, or in conversations like this about AI, many people actually were not using the tools regularly, and that meant the conversation could be very mixed in understanding. I think now, as I talk to people, they have varying levels of confidence in their understanding of the technology or their understanding of how it can or should be used. But people are further along simply in testing it themselves, in actually using the technology day to day. And I think that gives everyone a much better lens into how it matters for their own life and work, and it makes it a much more participatory conversation. When I went to OpenAI, I had someone say, “Oh, I didn’t know you were an AI person.” And I thought, “Well, we’re all going to be AI people faster than we realize it.” I think particularly for policy, it’s really important that that happen as quickly as possible, and I think it is happening quickly. I think everyone in this room is now contributing to this conversation through your own expertise, and that is really essential for where we’re going. On the other piece, I think the agentic systems comment is a really wise one. I actually downloaded a product I hadn’t heard of before this morning that is effectively agentic systems for small businesses—just for me, it was a free product in beta that already has agents creating other agents for small businesses, a simple free download. So less expensive, very, very approachable, requires almost nothing from me, and may or may not be able to do some of what the enterprise business products from OpenAI and/or Anthropic can do. So really, that ability to use agents in your daily life is here. The last thing I would say is I did work quite a bit in health back in 2023 and 2024, and at that time, there were always people at the table who would say, “Well, but it gets all the answers wrong, right? Hallucinations. It can never be reliable because it gets answers wrong.” And that is a very real concern. But I think people are starting to understand that that also really is a technical problem, and it’s one that agents are really poised to make tremendous progress on. ChatGPT out of the box at a GPT-5 level is going to be a much better model. It’s much easier to use, it’s much more accurate, et cetera. But now people are starting to understand that for your discipline, for your work, particularly if you’re in health or the sciences, you really are talking about a variety of technical interventions to drive accuracy and reliability. And those interventions really do work, and we are starting to see them work. And that means that you can get more and more deployment in some of these spaces where reliability and accuracy are critically important. Great.
(Marina)
Thank you. Okay, let’s open it up. Lots of questions. I don’t want to be in charge of picking them. How do we do this? I give you my microphone and you do all the hard work? Okay.
(Lane)
It’s on right now.
(Seshank)
I got the mic first, so I guess I’ll start. Hi, my name is Seshank Gandha. I’m a senior here, studying neuroscience and computer science, and I’m interested in the intersection of AI, machine learning, and healthcare, both for research and, really, the last point you were touching on: applications and workflow for clinics. One of the things I saw in talking to some physicians is that they don’t trust using AI to improve their workflow in the clinic, partially because of the hallucination thing, but also from a data security standpoint. So I guess my question to you is, what steps can or should AI companies take in order to build that trust with physicians, or with patients who are saying, “Hey, I’m going to a doctor who’s using AI and not really using his brain”?
(Lane)
I guess I’ll start with a general comment. I think one of the things I’ve learned most in tech is to trust the process of iteration and really to trust that things don’t come out of the box done. As a lawyer, I was always trained that everything needed to be perfect. I would say that building trust in healthcare is in part an iterative process. It’s going in and really working with the client and really helping them to understand the technology. It’s testing, it’s evals, it’s really driving the results that they need to see to trust the product. On the privacy side, and that’s been a conversation I’ve had with many of you today, I think there is work to be done, both to make sure that we have the laws and regulations in place that we need and that people trust them. I think healthcare systems are approaching that in some of the same ways that they’ve approached other technology that’s been integrated into their systems, but it is constant and very hard work on their side to build that trust, not just with you, but with patients.
(Thompson)
Yeah, I’d add: has anyone heard of the book Crossing the Chasm? It’s a really common or popular business book. The idea is that when a new technology is introduced into the market, it gets a quick jump in adoption, mostly from early enthusiasts and early adopters. And then there’s this big gap it has to leap, this gorge, this valley of despair, before the pragmatic majority starts using it. I think across industries with AI, you’re going to see different timelines for who crosses that chasm first. Software engineering is clearly the first segment, the first industry, to really cross the chasm. The vast majority of software engineers are using AI. Healthcare, I think, is a great example where you will probably see certain use cases in which you don’t yet need that level of trust, no hallucinations, that perfection Lane is referencing, so they can start adopting it there. But in some areas, it’s just going to take a little time. Things like privacy and compliance, we’re all working on that. Our company really leans into enterprise, so we’re working a lot on HIPAA compliance, things like that. I think that is less of an AI problem and more of just a compliance and technical—it’s not a problem. It just takes time to invest in. But I think that will get there quite quickly.
(Jim)
So Jim Greenhill, class of 1988. You touched on geopolitical risk.
(Marina)
Can you expand on that a little bit?
(Jim)
And if there is an AI arms race, who’s winning at the moment?
(Thompson)
Great question. I certainly think about this a lot—I spent time in China, and I worked in foreign policy with regard to US-China relations. China was maybe the dominant civilization on earth, even though it didn’t really reach Europe in the same way that you can reach any part of the globe now. It was the most advanced civilization on earth for a number of centuries. It then missed the Industrial Revolution, and that led to what China refers to as the century of shame or the century of humiliation, depending on how you translate it. That ran roughly from the 1830s, with the Opium Wars kicking off with the First Opium War, all the way to 1949, when the CCP came in, established New China, and kicked out all the foreigners. And the reason this century of shame happened is that they missed the Industrial Revolution. They missed a technological revolution. And that is a visceral feeling for Chinese leaders and many Chinese people. So you can imagine they are not going to miss the next industrial revolution, and they’re not going to let a century of humiliation happen to that country again. If you look back to the earliest years, even when Mao was going nuts and purging everyone, he didn’t purge the nuclear scientists. And when Deng Xiaoping came into power and reform and opening kicked off, he focused first on science and technology. They’ve known that this is the key to their rejuvenation, both economically and in terms of security. I think when Americans think about national security, we often think about intelligence and the military. We don’t think first about economic dynamism, which is really the root of our preeminence in the world. And I think that’s why you see this at the core of US-China competition today. Who’s winning? Tough to say. There are a few different elements to this competition. It’s not just who has the best models. America has the best models today. There are a couple of labs in China that have good models, and they are competing, especially one called DeepSeek, which is actually rooted in a hedge fund in China. But we are leading on models. There’s also adoption, or what people now call diffusion into the economy. And there’s a cliché that America invents and China builds. They don’t need to invent the technology to dominate a technological sector. We’ve seen this in solar panels, we’ve seen this in electric vehicles, and we’ve seen it in other areas. You could imagine America creating the best models and China diffusing them faster and still leading in AI and having that edge. But right now, I think it’s pretty early.
(Lane)
The other sector that I would add there is power. Thompson mentioned at the beginning that this is in part still all about scaling, and to the extent that we are trying to continually build the most powerful models, at some point that really does become a question of how much power we can bring online. That is a space where the competition with China is particularly intense. If you think of this as actually converting energy into intelligence at this point, our ability to build infrastructure and to power that infrastructure depends on things as intricate as certain components in the supply chain and the ability to connect data centers to the grid. Those are spaces where we are going to have to move very, very fast. Real question.
(Kori)
Hello. There we go. My name is Kori Billingslea. I’m class of twenty-nine. You were saying that AI is the future and that a lot more people are going to continue to use it. But I’m curious about the environmental impact of AI, because I’ve seen that in a lot of the cities where AI plants are, they use a lot of water, and those communities are starting to suffer because of it. So what can be done to reduce the environmental impact in the future if it’s going to become a part of our everyday lives?
(Lane)
I think that is a tremendously important question and the right one to be asking. From a technological perspective, obviously clean, firm power, getting nuclear fission and fusion as well as geothermal online as soon as we can, is essential. But in the meantime, the actual data centers that we’re constructing, how we’re powering them, and the different forms of cooling are very real questions for local communities. Microsoft just announced a data center in Wisconsin that has a number of different components that are in some ways first in class. So really understanding the difference between the ways people are building becomes more and more important, and putting pressure on what’s possible is quite important. I think there will be great tension between the speed at which we need to move and our ability to build in a way that is healthy and sustainable.
(Julia)
Yes. Hi, I’m Julia Spicer, class of eighty-five. For context, I had about a thirty-five-year career in innovation and technology across multiple industries: healthcare, cybersecurity, blockchain, all the others. So I certainly appreciate the enthusiasm that comes with any entrepreneurial new technology; I spent much of my career in that. But I have to step back for a second and ask, at a bigger-picture, higher level, what do we do about things like who’s running the ethics on it? With OpenAI, we all thought that was the plan before it blew up with Sam Altman a few years ago: to give some rules of the road so we had some protections in the industry. I worked in regulated and unregulated industries, and I normally like the unregulated side. But I hear different companies saying, “Well, we’re doing this and we’re doing that.” Who’s doing it at a higher level, both within the government and within the private sector? What institutions are governing or setting some governing rules around AI?
(Lane)
I can offer a few reflections there and then hand over to Thompson as well. These, again, are at the heart of the things that we need to confront. Obviously, we’ve just had a federal administration switch, which changes in some ways the way the US government is approaching these questions, with the AI Safety Institute still existing but potentially playing different roles. You’ve seen the global conferences, Bletchley, and now, I believe, in India in January, where you have world leaders coming together. But I think the geopolitical situation is making it hard for us to actually come together from a global perspective. And then the question is, when you have that geopolitical tension that requires you to move fast in the United States, how do you actually ensure that you have the regulation you need to keep things safe? By background, I’m not only a lawyer like these two but also a community organizer. These are important things: people choose their products, they choose which technology to use, et cetera. Those are important choices. I do think that many of you in this room who are scholars will likely enter the AI policy space, and it is a space we should have our very brightest minds working on, because how you deal with the pace at which we need to move while keeping everything from AI safety to privacy in mind is critically important.
(Thompson)
Before I answer, is Monty Evans here? Raise your hand, Monty. Monty’s class of twenty-two, also works for Anthropic, and works on a team that is very focused on questions like this. It’s led by this amazing woman who’s a trained philosopher from Scotland, and they work on Claude’s character and things like ethics. So he’s a great expert to go barrage with questions after this panel. I should also call out, I think I see James Williams here. James is a China expert, worked for ByteDance, and also works in AI, and knows a ton about the US-China angle, too. So a lot of expertise in here. I would say, back to this concept of scaling laws that we started out with: the company I work for was basically founded because the founders saw scaling laws coming. They saw this explosion of intelligence, this arms race to fuel that explosion, and realized that we didn’t really understand how this technology worked. We couldn’t reliably steer it. We couldn’t reliably interpret it, either retroactively or proactively. And there were all these ethical questions that we would need to grapple with as this explosion occurred. And so initially, our company was founded just to research the technology. And the way you do that is you build the best models, so you can stay at that frontier, testing the best AI to know what’s coming. We often talk about this concept of a race to the top, where we want to show that you can build the best technology and do it responsibly. If we show that you don’t need to cut corners and you don’t need to say, “Hey, stay out of our business,” to policymakers and others, then other companies have to show that they can do that as well. One of the areas where we’ve really leaned in is called the responsible scaling policy. Monty and Aaron and I were talking about this at lunch. It’s called our RSP, and it’s basically self-regulation: we identify these different levels, AI safety levels, so that when we see that a model we’re about to release is so powerful it’s entering a new threshold of capability, there are all these checks we’re going to run to make sure it is responsible to release it, that it is safe to be out in the wild. And I think there are a lot of approaches you can take like that internally. Does it solve every ethical dilemma and every question with this? No, but it helps us learn as we go and show that we are asking those questions before we put this technology out into the world.
(Marina)
I’m going to take the mic back because there’s one question that a number of you suggested to me you’d like to hear the answer to, so I want to make sure we get to that. It’s very hard, even for those of us doing this full-time, to keep up with the dialogue and the conversation. There’s a lot of noise around AI. Who are the commentators, the writers, the authors that you follow and trust and think are most important to be paying attention to when it’s impossible to pay attention to everything? I have my own ideas.
(Lane)
You go first.
(Marina)
Okay. It was a loaded question. I get this question all the time. In my opinion, if you follow one person, if you listen to one person, it should be Ethan Mollick. He’s a professor at Wharton, and he’s been writing really good, easy-to-digest, easy-to-understand information about what this technology means. I think he doesn’t come at it with an agenda. He’s got a simple little book called Co-Intelligence, and when people say, “I just don’t even know where to start,” I always say start there. Anyway, if you can’t follow anybody but one person, my vote is for Ethan Mollick. Almost everything he says, I find to be relevant and digestible and understandable. So that’s my vote.
(Thompson)
Ethan Mollick is great, and Co-Intelligence is a great book. It’s funny—I’ve been following this newsletter since before Anthropic was founded, but our head of policy has run it for almost a decade. It’s called Import AI, and his name is Jack Clark. I think he does a great job of taking very technical research and then giving the “so what”: what does it mean if you’re in business, if you’re in education, if you’re in policy? He puts that out every Monday morning. I think it’s a great newsletter. And then, being a China geek, there’s a podcast called ChinaTalk with Jordan Schneider, and he talks a lot about technology. He actually goes into non-China topics all the time because it’s all relevant. I think those are two great ones.
(Lane)
I’m going to go in a slightly different direction. I often refer to my own reading as effectively my own training material. I try to read voraciously, understanding that I won’t retain everything, but that it will help me understand trajectory and go deeper in different areas. For that reason, right now I follow a number of energy and infrastructure newsletters, trying to make sure that I’m up to speed there. I also read the BlackRock reports, the McKinsey reports, and J.P. Morgan’s, as they come out in the infrastructure space. On LinkedIn, I love following promising startups that I identify through VC firms, because there’s a lot of energy there, and it shows you where people are building and what people are excited about. So I spend a lot of time there. I have also gone backwards. With a little bit more time to breathe, I’m reading the books that I wish I could have read before, like Nick Bostrom’s Superintelligence, and right now I’m reading the Kissinger book that came out with Eric Schmidt in 2021. I also watch a lot of sci-fi. I did not grow up with sci-fi; these were not things that I found really compelling at the time. But the currents, the things that people like Eric Schmidt are really pointing to now, you can see earlier on in these movies. For those of you who have not been in this space for a long time, I would say, don’t be afraid to go to places that are unusual to you, because they may help you understand a bit more how people are thinking. To sum that up, I would say, do also follow the big ones, whether you like them or not: Elon Musk, Eric Schmidt, these folks. They are thinking big thoughts. They are thinking a lot about space right now, and you don’t want to miss it.
(Thompson)
I’ll just add, sometimes if you’re not going to weird places, to Lane’s point, you’re not trying hard enough. I think sci-fi helps you get to some of those weird places that are real problems that people are going to be grappling with very quickly.
(Zoe)
Yeah. Thanks so much. Hi, everyone. Zoe Walsh, class of 2014. I am actually really curious. You both mentioned optimism about science, research, health, everything that can happen there. And obviously, we know a lot about LLMs, large language models, but those advances seem like things that are probably going to come from this next generation of LQMs, the quantitative models that are approaching things differently. Can you talk a little bit about how you’re thinking about LQMs and how they’ll change the space, if you think that they will, or whether it will be sticking with the language side of things and building tools on that infrastructure?
(Lane)
I can give a big-picture answer and then let you fill in technically if you’d like. I do spend a lot of time thinking about what AGI means. Artificial general intelligence, in theory, is AI that can work at a PhD level or higher across disciplines. On the question you’re asking: when I think in that space, I really do try to think, what would it mean if we had physics capabilities, if we had a deep understanding of the physical world? I sometimes think of these as individual models rather than as all one great big model. But I try to get there and then think backwards, because when you think about which companies will be disruptors versus disruptees, et cetera, understanding where people are trying to get in each of these spaces is sometimes more important than understanding where they are right now, given how quickly things are moving.
(Thompson)
I won’t claim deep expertise on LQMs. Maybe I should ask you about that after the talk. But I would say that, generally speaking, we’ve seen paradigm shifts pretty frequently in this space. Again, scaling laws, LLMs, transformers in general were all big paradigm shifts within the last decade that are driving the explosion of innovation we’re seeing today. And so I do think there are smarter folks than me who are anticipating what those next paradigm shifts are going to be, and they’ll come for sure. But where I sit at Anthropic, we’re still very much in this LLM scaling-laws world. And again, we haven’t seen that ceiling yet.
(Marina)
Okay, I think we’re at time. We’re going to have to wrap up. I’m going to ask one last quick question. What is the last cool thing you did with AI? And if you say, “Plan a trip to Italy,” it’s going to make me look bad.
(Thompson)
I’m really geeking out on this product we have right now called Claude in Chrome. You can basically put a little AI, Claude, in your browser with you and just ask it to do things in your browser for you. So I had to do this research project, where I just listed out in a Google Doc all these questions I wanted to answer. And I just said, “Hey, Claude, can you do this research for me? Can you answer these questions?” And instead of giving me a long feed where I could copy and paste stuff or click into things, it just went ahead and created a new Google Doc and did all the research for me, with links and citations. And again, this is one of the least impressive things AI can do right now, but it’s pretty magical that it can just use—is that feature released?
(Marina)
Yes, it’s been released.
(Thompson)
Yeah. It’s so hard to keep up. So it can use my computer now and do things instead of me doing it on the computer. And that’s pretty fun.
(Lane)
I’m a big Perplexity user. I now use both ChatGPT and Claude regularly. And the most recent thing that I did was designing a global affairs function for a stealth startup with ChatGPT.
(Marina)
Thank you all for coming. Thank you for great questions. Thank you to our panelists. Really appreciate the conversation.
(Thompson)
Thank you.
(Lane)
Thank you.


