
Jon Askonas on AI policy in the Trump era

I'm launching a new podcast! Please subscribe in your favorite podcasting app.

I’m excited to announce I’m launching a new podcast with Dean W. Ball, a research fellow at the Mercatus Center. Every week we’ll talk to leading experts about the future of AI technology and policy.

I’ll be cross-posting the first two episodes here at Understanding AI, but most of the conversations will not be cross-posted. So if you want to listen to them, you’ll need to subscribe to the podcast directly. There are two ways to do this:

  • To get episodes in your email inbox, visit www.aisummer.org and enter your email address.

  • Open your favorite podcast app and search for “AI Summer.”

Jon Askonas is an Assistant Professor of Politics at Catholic University of America. He is well connected to conservatives and Republicans in Washington, DC. In this December 16 conversation, Askonas talked to me and Dean about Silicon Valley's evolving relationship to the Republican party, who will be involved in AI policy in the second Trump Administration, and what AI policy issues are likely to be prioritized—he predicts it won't be existential risk.

Timothy B. Lee: So we had an election last month. And Donald Trump won the election and there's going to be a new Republican administration coming into office. And I would not say that AI policy was at the top of the Trump campaign's agenda, but certainly things are happening in the AI world.

I've been wondering, what does AI policy likely look like in the new Trump administration?

Dean Ball: Yeah, me too. I was frankly happy to see that AI wasn't a major issue in the election. Because if it were a major issue, it would probably mean it was causing problems for people.

But certainly a lot of people from the tech world and AI adjacent worlds sort of jumped onto the Trump train at various points in this election cycle, from Elon Musk to Marc Andreessen. Obviously, Peter Thiel has been on that train for many years, but others joined, to the point that there is this kind of new movement that people call the New Tech Right. And I'd like to learn a little bit more about that.

Timothy B. Lee: One of the things I think is hard about a topic like this—and actually this is true of a lot of topics—is that the things people say publicly are not necessarily the same as things people say privately.

And, often the conversations that really matter are among people who have been in a particular community—in this case, Republican or conservative politics—for a long time. And to really know what's happening in that world and what's likely to happen in the coming months, it helps to talk to somebody who is well plugged into that world and knows a lot of the players and is part of those conversations.

Dean Ball: Well, we happen to have a great guest for just that purpose. One of the most plugged in, tech-oriented conservatives I know in Washington DC and also one of the smartest, Assistant Professor of Politics at Catholic University of America, and a Senior Fellow at the Foundation for American Innovation. Jon, hello.

Jon Askonas: Thanks so much for having me.

The rising influence of the New Tech Right

Dean Ball: Anyone who knows your writing knows that you write about a huge range of different topics. I think your Substack should be called Hyperdimensional, if you have one. Not mine. But you've written about military affairs and technology and religion and just a huge range of different and interesting topics.

But you're also known around DC as a guy who's quite plugged in with what's happening in Republican politics. I think we sort of saw an interesting transition with this election where the sort of New Tech Right started to become a thing. You've written and talked about that a little bit.

And I'm just curious, how do you see that shaping out in the Trump administration or the sort of forming Trump transition so far?

Jon Askonas: Well, thanks for having me. I'm really excited to be on the podcast with two people I've learned a huge amount about AI from.

I think it's hugely important. Traditionally, tech policy broadly considered has not necessarily been a top priority in the Republican Party. Aspects of it have—particular special interests or industries of particular concern, but it's not been a core issue. Technology is so important to so many different areas of American life and national security right now that there's been this quite fortunate concurrence of events and interests and players that has given folks from the tech world really a huge amount of interest in the transition—again, especially in those areas that maybe were a little bit neglected in Republican politics, since at least the Reagan coalition.

Dean Ball: Yeah, I mean, what people like Marc Andreessen and others in the tech community have said—prominent figures from the tech world who notably for the first time in a lot of cases, came out in support of Trump and the Republicans in this election cycle—they said, we felt like we had a deal with the Democratic Party. And that that deal broke down.

And the deal was basically, we'll let you do what you want on social policy and economic policy. We'll fund quite a bit of your policy agenda, and in exchange leave us alone. And the leave us alone part is what sort of started to break down in the feeling of many people.

But when I look at the Republican Party, I'm not sure the leave-us-alone part is entirely going away either. You see a lot of Republicans who are in favor of, you know, I mean, there's certainly a lot of anti-big-tech sentiment. There's all these kinds of different things. So break that down for me. I'm kind of interested in what the tensions are here. It seems like there's a bunch of different tech-adjacent, tech-supportive factions, and that right now we're in this nice honeymoon phase where everyone's friends, but that when we start putting pen to paper conflicts might start to emerge. And I'm interested in what you think the vectors along which conflict might start to emerge might be.

Jon Askonas: Well, I think it's helpful to understand, before we get into the breakdown of the Democratic Party/Silicon Valley alliance, what my friend Kevin Munger has called the Palo Alto Consensus. And I think there were sort of three big factors that drove that breakdown. The first, which no one is really talking about, is really a kind of secular macro factor, which is that we didn't have a lot of laws on the books shaping what you built with software and on the internet. As this consensus emerged in the 1990s with the Clinton Democrats, a number of laws were passed with bipartisan support that really kind of created the internet as a free space, maybe even too free a space for all kinds of activity.

And so as long as venture wanted to build giant B2B SaaS companies or social media companies, there wasn't a lot in the way of positive regulation that could get in the way. But by, you know, let's call it the mid-2010s, a lot of the alpha had been squeezed out of B2B SaaS.

And so if you're in venture, you're looking at other verticals, other areas to make interesting deals and plays and build businesses. And as the venture world awoke from the sort of fantasy land of B2B SaaS internet companies, they saw that every single other area of life was far more regulated and far more controlled. So I think it's not so much that the Democrats turned on Silicon Valley—there's an element of that. It's also that as Silicon Valley wanted to build different things, it had gained new interests, as it were.

The second, which a few people have spoken about (it's a little bit of a nuance in the kind of leave-us-alone thing), is that from the Clinton Democrats onward there was really a place of honor for technologists and innovators in the Democratic Party coalition.

So a lot of the rhetoric would attack big business and crony capitalism and greedy bankers and all of that. But there was this massive carve-out for the innovators, the tech billionaires. They were not like the others, right? They were special.

That began to break down. A lot of the populist turn on the left—there's been a populist turn on the left as well as the right—meant that as people like Jeff Bezos, Elon Musk, and Mark Zuckerberg became industrial titans and not simply startup founders, the same kinds of criticisms began to apply to them. And so I think that that's a second element of this breakdown.

And the third is that in the wake of 2016—and not simply the election of Donald Trump but also the Brexit vote—the idea that internet and social media companies were a vector of implicitly progressive American power in the world really began to break down. That idea reached its heyday in the Obama administration with the Arab Spring and Jared Cohen and all these people. And, you could call it the establishment—certainly figures in Washington as well as in academia and the media—began to turn a skeptical eye towards tech and frankly to try to pressure tech companies to behave in ways that were politically advantageous to them.

And that created serious conflicts. So I think that's what's underneath this sort of breakdown of the consensus.

Timothy B. Lee: As a member of the media, I definitely see a similar kind of dynamic where media coverage used to be much more favorable to tech companies, and then in the last five to ten years it's become more hostile. And I think a lot of tech executives and people in that world feel that the media has kind of an axe to grind against tech companies.

But it's just like, they're treating tech companies the way they treat, you know, oil companies and drug companies. It's like the natural way that media institutions deal with big companies is they are skeptical. They see themselves as speaking truth to power. And now that tech is power, they're treating them the same way.

And it's certainly reasonable for tech to not like that. But I don't think they're being singled out. It's just like there's a natural tension there that became more obvious as these companies got bigger.

Jon Askonas: Yeah, so to come back to your question about what this Republican coalition looks like: for one thing, capitalists, entrepreneurs, etc. have a natural place of honor on the right and in the Republican Party. And so that's been a change of pace that many folks in tech have responded positively to.

There's still appetite for going after big tech, but what that means varies dramatically within the Republican Party. So there's pretty universal consensus that large tech companies, especially companies in the information space, so Google, Facebook, etc. should have serious limitations on their ability to censor ordinary Americans’ political expression and that any kind of partisan or biased enforcement of, you know, whatever, terms and conditions or content moderation is a kind of political threat that deserves a political remedy, right?

That's not a controversial thing to say anywhere in the Republican Party, and the gentleman who has been nominated for FTC chair is gonna make that part of his sort of fundamental program. There is an appetite in certain parts of what we call the new right that is a little bit more concerned with economic concentration. Even there, though, I think there are some differences.

So there are those who want to use antitrust tools to break up Google as a form of political punishment, as well as, frankly, to unlock innovation: all these, you know, 10x engineers who are spending all their time getting one percent improvements in Google's back-end efficiency instead of building amazing AI companies.

And then there are those who accept more of the neo-Brandeisian model. So, famously, before he was elected, then-Senator J.D. Vance was probably the person most friendly towards Lina Khan in the entire Republican Senate caucus.

I think there are some real tensions there. I do think that the picks so far suggest a kind of moderation here. We're certainly going to move away from some of the slowdown in M&A transactions that we saw under the Biden White House.

There will still be antitrust enforcement, but it's really aimed at the biggest tech companies. And most of the people in this sort of tech right coalition—maybe Elon Musk excepted—aren't from the world of big tech. They're much more from the world of venture, privately held companies. And so it's sort of worlds apart from the handful of very large companies like Facebook, Google, Apple.

Dean Ball: And I think that's borne out if you just look at the broad support that you see for Trump from the tech world. For the most part, it's startup founders, venture capitalists, things like that. I think I've seen statistics on rank-and-file big tech employee political contributions, and that looks like Harvard University: 95 percent Democratic Party donations. It's somewhat ironic to me, because the people who have the most to gain from more lax antitrust enforcement are the big tech companies. So I find it funny, but I think there's also kind of more cultural inertia inside of a large entity like that.

Jon Askonas: 100 percent.

Dean Ball: So, what I kind of hear is that tech started to turn its focus toward the physical world. I think the first big fight that I can think of like this is probably Uber, right? Uber fought a bunch of largely Democratic, blue cities—New York City, San Francisco—and had all these big regulatory fights, and you have things like Airbnb.

And then increasingly though, more ambitious projects in the physical world. Obviously colonizing Mars is an ambitious project in the physical world, but it turns out also that so is AI. Even though AI is fundamentally an information technology, it is reliant upon massive industrial investment in the physical world.

And so that feels like an area where everyone can get on board.

Jon Askonas: 100 percent.

How much influence will the New Tech Right have?

Dean Ball: Whether you're Microsoft or whether you're an AI startup, you want there to be more computing infrastructure and things like that. But turning to AI as an information technology, I think we've seen different takes, including even from people in the tech right coalition—and certainly more broadly among Republicans.

You have Elon Musk, who has expressed some skepticism about superintelligent AI or certainly a desire to take safety concerns very seriously. He supported SB 1047, the bill in California to regulate AI. Marc Andreessen on the other hand, seems to be opposed to effectively any regulation on AI as a technology.

And then I think you also have people like Josh Hawley in the Republican Party who seem to be supporting quite aggressive regulation at the federal level against AI. So there's a lot of different stuff going on there. And I want to kind of tease apart a bunch of the different ways that might shake out in terms of both domestic and foreign policy.

But the first question I have for you is a little bit more basic. This New Tech Right group, they're excited right now. There's a lot of good vibes on Twitter. But it is a relatively small number of people—influential, but small in the grand scheme of things.

They're coming into a big party that's been around for a long time with a lot of existing power bases. Are they going to be listened to? How relevant do you think they're going to be in DC debates on tech policy? Do you think they're gonna clash with other people who have been around longer and sort of get drowned out? Or like, how do you think that's gonna work out?

Jon Askonas: It's a really good question. And in part it depends on who has the ear of the president, and who has the ear of the vice president.

At a very high level, I think that they will have a lot of influence within the White House, due not only to this White House's newfound appreciation for the tech right, but also to a lot of the personnel decisions that have been made.

And frankly, like, if you go back to 2016, yes, there was a lot of chaos in the overall Trump administration for a lot of reasons. But if you kind of zoom in on some of the folks working on tech, you know, folks like Michael Kratsios, Josh Steinman, they were pretty effective, and they're still around, and their ranks have only grown. So I think that there's going to be more of a through line here than maybe people appreciate.

Once anything leaves the White House, though, and goes into the agencies, and especially once it goes into Congress, folks in tech are going to be less effective. And there's still a lot of infrastructure work to be done actually building out the whole apparatus of finding friends and allies on the right in Washington to be able to do things effectively.

Now, some of that has already started, maybe behind the scenes. It had started even before the summer. And look, there's a lot of goodwill in the GOP right now. It feels like the tech right is a very needed breath of fresh air, new blood. And frankly, on a lot of these issues, traditional Republican lawmakers don't feel that they have a strong sense of what they ought to do. I think there's room for influence there.

But there is a need to build the underlying apparatus and I do think that folks in tech, just from public statements, are vastly underestimating the domain specificity and complexity of some of what they're doing.

Key Trump appointments for AI policy

Timothy B. Lee: So we're recording this on December 16th. Can you walk us through the major appointments that have been announced so far that are related to AI, and what kind of clues that might give to Trump's thinking and to the kind of policies that the administration might be pursuing?

Jon Askonas: One of the most interesting isn’t an appointment, of course, it's the vice president. Senator Vance was pretty engaged in AI, explicitly supportive of open source AI for reasons we can talk about, was a venture capitalist himself, very connected in the tech world, was tracking the issue in a way that very few people in the Republican Party were, certainly at that level of seniority.

And he and his team are going to bring those insights and that perspective into the White House. I think some of the big appointments so far are, obviously, David Sacks as kind of AI czar, although what that entails is still kind of undefined—at least publicly. Jacob Helberg is going to be in charge of the State Department's Economics Bureau, which plays a very important role in geoeconomic statecraft.

And senior appointments at Commerce and Treasury and US Trade Representative, all suggest this is going to be an admin that's very able and willing to use the full complement of tools to protect AI for the American national interest and do whatever they can to limit China's access to the bleeding edge of AI.

Even in the first Trump administration and then in the Biden administration, there was a fair amount of work done through entities like CFIUS to create export controls around the latest and greatest chips. There's been work between the White House and Congress to extend those regulations to cloud computing, to close the loophole where you couldn't buy the latest GPUs but you could buy cloud computing based on them. I expect that that's going to advance and continue under this administration. Just the other day, the Chinese announced a raft of sanctions on American companies.

And I would not be surprised if we see a kind of tit-for-tat supply chain warfare between the U.S. and China in the opening months of the Trump administration.

Don’t expect Trump to focus on existential risk

Timothy B. Lee: So in AI policymaking, there are kind of two big threats that people are thinking about. One is China and other foreign governments getting better AI than us. And the other is the AI itself causing problems.

Jon Askonas: I would add a third, and I think this is something that people who work full time in AI don't appreciate. If you asked a senior policymaker, if you said something to, like, soon to be Vice President Vance, what are the top AI threats?

He's going to say the China threat, which has a few different dimensions, right? It's what China can do with the best AIs, especially militarily. And it's the threat that, as AI forms the foundation layer of the next evolution of the internet and the kind of information society, whoever owns the underlying models and hardware and architecture gains a lot of what Farrell and Newman called weaponized interdependence. If Huawei owns your networks, then that creates enormous intelligence capabilities, and then, in extremis, military capabilities. That would be one.

Two would not be existential risk. It would be something around AI and censorship, and the kind of political implications of AI, especially in a global context. Outside the United States, we're seeing so-called developed countries crack down on free speech and, I think, lay the groundwork for AI-based content moderation. In some ways this is baked into the technology itself. That would probably be two.

The third would probably be the economic implications of AI. There's a lot that depends on the impacts of AI and robotics on industrial processes. And then maybe there'd be some other ones before you got to “AI safety.”

There are obviously people who are deeply concerned about AI safety. They tend to be people who are not from Washington, are not necessarily politically influential, certainly not policy makers themselves.

So I think that, survey research notwithstanding—and we could talk about how duplicitously that was done—certainly in conversations around Washington, if you bring up existential risk as a reason to do something, it's not very compelling. And you know, last year I was in a meeting—I think I can talk about this now—alongside some work the State Department was doing, related to what came out as the Gladstone Report. I can't talk about who was there and what was said precisely, but I think I can say that I was pleasantly surprised that relatively senior levels of the civil service bureaucracy working on AI policy were far more skeptical of AI safety reasons and policies than some of the people who are “technical experts” on it. So I think that there's a bit of an assessment gap here from folks working in AI about how important AI safety is, vis-à-vis what's actually going on in policymaking.

Timothy B. Lee: So I'm very sympathetic to that view. I'm just thinking, I mean, Elon Musk seems like he's gonna be one of the most prominent—it's not clear what his role is—but certainly somebody in Trump's orbit. And he, as Dean said, has doomer kind of sympathies.

And Tucker Carlson has said some fairly apocalyptic things, which makes me think there's some level of grassroots interest in this. So is there not a faction of the right that really is on board with these existential risks? Do you think they are on board but they don't support regulation? Tell me what that side of the coalition is like.

Jon Askonas: Great question. Two things I'll say. One, it's not clear how much of Elon Musk's positions on AI are driven by his P(Doom) versus particular judgments about the personalities involved and his history with them. You can look at his support for SB 1047 as a principled stance for AI safety, or you could view it as a savvy political tactic to hamper his California-based competitors while he builds xAI in Texas, you know?

Timothy B. Lee: And we know when he started OpenAI, he talked a lot about wanting to stop Google from getting control of AI. I'm certainly not going to disagree that Elon Musk is sometimes a bit cynical in the way he frames things.

Jon Askonas: To the bigger question, I would say, just across American politics—I don't think it's necessarily that right wing—there is a lot of latent potential for apocalyptic fears about AI. And you see some of that in the rhetoric of people like Tucker Carlson. Before we talked, Dean, you mentioned some other religious figures on the right who are quite concerned about this.

Whenever folks from the AI safety community lean back on kind of the Terminator, Skynet scenario, whatever, there's a kind of language of AI apocalypse that is quite deep in our popular culture. There's some interesting political science evidence from the Cold War that shows that these kinds of pop culture narratives can have actual policymaking implications.

Where I think the issue is, is that there's a breakdown between the way that folks working in AI safety see the problems that are playing out and the kinds of policy levers that are available. That's especially true of anything around an AI pause or regulations that would limit work above a certain FLOPS threshold or whatever.

The counterargument from national security is just way stronger. So I don't think those measures have been or will be successful, unless and until, in some cases, you have folks who are tying those to the threat of China. And then I think they're going to be more successful.

What could happen is that there will be some event that activates some of those apocalyptic fears, and then that could change the politics around AI quite quickly. Now, it's not going to be rapid takeoff, AGI, FOOM. It's much more likely to be something that is sort of an industrial accident, or frankly a lot of the really kind of heartrending stories. I don't think people in the AI policy community are talking enough about some of the Character.AI things that have happened, which have been enormously influential at the state level.

Republicans could prioritize protection for children

Jon Askonas: So the vast majority of AI bills at the state level are not motivated by X-risk AI safety. They're motivated by the same kinds of concerns people had about big tech, about protecting kids from AI. Unfortunately, a young man killed himself after long conversations with a Character.AI system which did not tell him—did not encourage him in any way—to kill himself, but also didn't catch obvious signs of despair.

I'm reminded of a very sad story from around 2008 where a young mom had gotten so addicted to, I think it was Farmville or something, that she neglected her infant and the infant died. So there is something about the sort of fascination with new technology and bad things that happen that is a kind of moral panic that's not the first time it's happened.

But those are the kinds of things that are much more influential, I think, at the state level than X-risk or high level AI safety arguments.

Dean Ball: Yeah, I've tried to emphasize that myself in my own writing. The Character.AI thing in particular, because I think a lot of AI policy people are, you know, very rationalist and sort of logical.

And so they looked at the New York Times coverage of that story and they said, well, it doesn't look like the model actually played a role, so I guess everything's okay. Without realizing that whether or not it's a good lawsuit is an entirely separate question from whether or not it is a horrific set of facts that will motivate things.

Jon Askonas: Yes.

Dean Ball: In the last five or six years, kids’ online safety has become one of those issues for a lot of Republicans that's like civil rights for Democrats. If there's a civil rights bill, Democrats are going to support it. It doesn't really matter what it says, Democrats are going to be like, I have to support this. Religious liberty was one of those issues for Republicans for a very long time, probably still is. But it feels to me like an emerging, sort of other knee jerk one is gonna be kids’ online safety.

And also the state bills that you're talking about, that Republicans are backing in some cases, are quite orthogonal: oftentimes really quite unrelated to kids' online safety. It's not like we're gonna have age limits for kids on AI platforms. It's not like that at all. It's completely unrelated stuff. But they're using the kids' online safety angle because I think it's a very effective one.

Jon Askonas: Yeah. So you mentioned briefly earlier Josh Hawley's position, and I haven't looked deeply into his case in particular, but again, I think something that folks coming to AI policy from the world of building AI systems don't appreciate is that the number one motivator for state and federal lawmakers on the right around AI is basically remorse or regret about the social media era and wanting to make up for lost time, so to speak.

And framing AI through the lens of social media, especially around political censorship and kids’ safety.

Two philosophies for competing with China

Dean Ball: So earlier you were talking about some of the different concerns that Republicans might have about AI and you listed China as being the top one and then two sub-concerns that might exist under that.

One is what nefarious things might the Chinese do to us or our allies using AI? And then the second is, if AI is going to become this kind of infrastructure—the way the whole economy works all across the world, then it matters very much who owns that infrastructure.

Certainly it matters who built it, since an AI system is fundamentally a values-laden proposition. The language model, at the very least, is. So those two things are in tension.

Jon Askonas: 100 percent.

Dean Ball: There's a quite interesting deep tension there, and you see it in different aspects of the Republican Party.

You mentioned J.D. Vance and his support for open source AI, but you also have both elected Republicans and people on the New Tech Right talking about how, you know, open source AI means that we (and by we, you could really just say Meta) are giving our model weights, our precious model weights, to China.

Jon Askonas: Yes.

Dean Ball: And there's a lot of people who believe this. I hear it all the time when I'm on Capitol Hill: “the only reason the Chinese are so close to us in AI is because of Meta giving away our models and open source.” So there's a desire to limit it. Open source in academia is often referred to as a spectrum, but it's not. It's actually one of those things in life that is not more complicated than simply being a binary. Either you can release the model weights or you can't.

Jon Askonas: Yes.

Dean Ball: And so like that's going to be a stark policy decision. There are different voices in the Republican Party on that issue. I'm not asking you to make predictions, but I am kind of interested just, like, how should we be thinking about that?

Jon Askonas: Well, I think you're right. There is a very direct tension between the first and second objectives. And you see this with GPUs, you see this with open source models.

If your primary concern is China's capability for building AI and using it in any way, shape, or form, then there's no real downside to restricting models, GPUs, et cetera. Right?

Frankly, that argument has carried the day so far, because it takes the form of: even if there's a one percent chance that the next version of Llama helps the Chinese, is it worth taking that chance?

“Sanctions on Chinese AI are industrial policy for Chinese AI”

The other piece of the picture is that we're in a global competition with China—including over who owns the infrastructure layer. So, for example, yes: if you restrict the latest generation of NVIDIA GPUs from going to China, that slows down Chinese AI production. But they're not gonna sit on their hands. What are they gonna do? Well, they're going to respond the way they have responded, which is they're going to get better at building training systems that use heterogeneous hardware.

They're going to invest in being able to build GPUs better themselves and improve the performance. And they're going to, by themselves, create a giant domestic market in China for these GPUs. So your Chinese companies that otherwise would prefer to buy NVIDIA GPUs are going to buy the Chinese model, because they have no access to NVIDIA GPUs.

In some ways, sanctions on Chinese AI are industrial policy for Chinese AI, viewed from a certain perspective. It's the same thing with open source. Like, yes, you're restricting China from having it, but you're also restricting anyone else from having it.

And that means when China builds and develops their own open source models, which they're doing, in part because they make money on the hardware side, those are going to become the global defaults for open source models. And since there's so many important use cases for open source models, you're handing that whole sector over to the Chinese.

So what I hope to see—and what I'll certainly be making the case for over the next four years—is a little bit more balance and nuance between these two perspectives. I think so far the kind of stark “we can't let them have our precious GPUs” argument has carried the day, but I think ultimately it's short-sighted.

And unfortunately, American sanctions policy for the last 50 years has often been short-sighted and counterproductive. So in some ways it's in keeping with the trend.

Dean Ball: It's funny too because I think the conventional wisdom among commentators on what Trump's policy is going to be is, well, the export controls are, you know, they're going to get ratcheted up. They're definitely here to stay. It's going to get more because Trump is anti-China.

But if you actually look at what Trump himself has said on this topic in the recent past, he was interviewed by Bloomberg in July, right after the first assassination attempt. And he was asked about export controls and he was like, ah, export controls aren't really my style. I don't see why we would limit the market of our own companies. I'm more of a tariff guy.

Jon Askonas: It’s the most beautiful word in the English language.

Dean Ball: I guess we'd put tariffs on Chinese GPUs or something. There's definitely some ways in which he cuts against the Washington consensus, even the Washington consensus on issues that he himself shaped. I mean, our posture towards China is fundamentally a product of Donald Trump's presidency in 2016, and a Washington consensus has emerged on that which he himself might contradict again.

He's expressed willingness to have BYD sell their cars in the United States if they're made here. You didn't have the Biden administration saying that, you know what I mean?

So I think people are not prepared, necessarily, for how—I don't even want to say it's loose necessarily—but how I think Trump personally might think about what is in our interest and what is not.

Toward the end of this last week, you saw the Biden administration come out with quotas for countries that we are friendly with, but maybe skeptical of in some way—Gulf States and places like that. I mean, how does that make them feel, right?

We have quotas on them for what they can get when, in fact, they already have quotas, right? Jensen Huang sets the quotas. So I think it's kind of an interesting dynamic.

But the question that I have looking at it from the fundamentals of AI as a technology is—and this is an area where Tim and I are different from one another, in some ways—I'm pretty bullish on the near term capabilities trajectory of AI. I think we will see AI models that can legitimately advance the frontier of science. Maybe not on their own. It might not be like you can go up to a language model and say, “Find a new discovery in chemistry.”

It's probably not going to be that easy. But there will be things like that. And I think you'll just see increasing amounts of real economic value. Right now, there isn't really any. There's just imagined economic value.

Jon Askonas: Oh I disagree with that.

Dean Ball: Well, I'm talking about with the language models.

Jon Askonas: Oh, I disagree with the language model too.

Dean Ball: Really?

Jon Askonas: Yeah, I think there's massive amounts of economic value. I think it's mostly not internalized at the firm level, which is why it's not showing up as…

Dean Ball: Well, there's consumer surplus. There's absolutely consumer surplus. But what I mean is like OpenAI is not profitable. The firms are not profitable and like you're not seeing Pfizer be like, “Oh, we couldn't possibly have made this drug without GPT-5” or something like that. But I think you will start to see that. That will start to be things that you see.

So, as that happens, it sounds like you're kind of saying that you think Republicans are going to remain, on net, pretty amenable to open source AI and things like that, at least for the short term. But do you think that that will change as AI just becomes more capable?

Jon Askonas: Well, I was making the case that they should be amenable to open source. I think that that's a really complex question.

There's almost a paradox here, which is that the closer the AI system is to, as you put it, “gimme a new discovery in chemistry,” the harder it is to make the case for open source, I think, because of the potential advantages for America's adversaries—and for groups like non-state terrorist organizations.

The fact that the latest generation of LLMs are really interesting, fun to use, and generate a lot of consumer surplus by letting you do things you couldn't ordinarily do, but even in laboratory conditions haven't really pushed the boundaries of enabling you to build bioweapons—it's the “slowness,” relative to what people were saying a couple years ago, of AI reaching those kinds of benchmarks that has enabled it to remain open source even to date. So if there's a GPT-5 breakthrough like the one you're suggesting, or more specifically if Meta is going to release a Llama with GPT-5-like capability, that is where you might begin to see a real internal battle about open source.

Don’t expect Trump to save TikTok

Timothy B. Lee: So on the question of control over infrastructure, I wouldn't say this is mainly an AI question, but TikTok is scheduled to be banned in the United States on January 19th.

Jon Askonas: It's required that it be sold, and their unwillingness to sell it—

Timothy B. Lee: Yes, exactly, but the penalty kicks in on January 19th. Donald Trump has had an interesting trajectory of positions on this. I mean, in 2020 he was threatening to do this himself, and then more recently he said that he's actually opposed to this.

Some skeptical people have pointed out that he had a meeting with Jeff Yass, one of TikTok's biggest investors, shortly before he had this change of heart.

How does that fit into your thinking about this, and do you have any kind of expectations for, is the rest of the Republican Party going to go along with Trump's apparent new view that we shouldn't ban TikTok? Do you see Trump trying to repeal that legislation or somehow prevent it from being enforced?

How do you see that playing out?

Jon Askonas: I have no insight into his true beliefs about this. Obviously, Jeff Yass has been a very important supporter of his for a long time. In addition, there's something savvy about fulminating against something over which you have no power or authority.

The fact that it's written in statute, and that the deadline was very intentionally set before the next inauguration (the people working on it worked very hard to make that possible), means I don't know what he can do about it. I expect that in one way, shape, or form, it will take place.

How many other companies are going to fall under this legislation? It's not really apparent as yet. There are efforts afoot to go after other similarly influential Chinese startups, which I expect the President will support.

I do think that the legislation has made it a lot easier. One of the problems in the first admin was, even when they wanted to ban TikTok, they didn't have a lot of political cover for it necessarily, and some of the mechanisms were not great.

The mechanisms here lean towards being able to preemptively require these companies to have US ownership. That being said, TikTok is almost a one-off case. There's only a handful of other Chinese companies that are nearly as influential as TikTok.

So I don't know how much we can learn from it.

Dean Ball: And TikTok also comes from an era when the Chinese invested more in things like that. And that kind of stuff includes AI too, interestingly. Information technologies are disfavored these days by the central party in China.

So I think, there are a few Chinese cultural hits from a certain era that made it big in the West. I don't know that we're going to see that many more of those. The question of Chinese social media and other kinds of consumer software services in the global South is kind of interesting, because that is an area where oftentimes Western companies are getting their lunch eaten.

A small-government approach to AI safety?

Dean Ball: I want to swing back to something you said that I was intrigued by on AI safety, and how, among Republicans, it's not that compelling a cause anymore in D.C. And I think that's true in a bipartisan way—maybe at the margin less so for Democrats—but in general I think DC has become more skeptical of AI existential risk. And then it's kind of doubled for the Republicans because in their view—and I think accurately, by the way—the term AI safety was overloaded by the Biden administration. It was made into quite a capacious concept, incorporating anti-discrimination and what Republicans would see as kind of woke or DEI principles.

And again, I think they're not wrong about that. If you look at Biden administration policy documents, I don't see how you can say they're wrong about that. But I have a theory that I want to run past you. You can certainly make the argument that ensuring control of AI systems and understanding how they work—set that aside from existential risk and just say we need to really understand how these things work, and we don't currently—is a legitimate field of scientific inquiry.

Is it possible to sort of isolate that field of AI safety or call it AI security or something like that?

There's a lot of talk in the AI world about rebranding, because that's actually what the existential risk people care about, mostly, right? Control of the systems, not in the sense of governmental control, but user control—I can control it.

Will the Republicans be willing to create a more circumscribed definition of AI safety and let the scientists and engineers cook on that problem, perhaps with public support of some kind?

Jon Askonas: It's a great question. I think there are two components to it. The first is that one relatively persuasive claim to Republican policymakers on AI safety has been: whenever you want to use AI to do something in the real world, a real-world liability framework takes over, right? It's not like you can just get away with saying, “Oh, my AI system, you know, ran your daughter over, so nobody's responsible, right?”

There are real-world liability frameworks that apply, and real-world regulatory frameworks as well that the systems have to match. And so the push from those of us opposed to AI safety has very much been: let's focus on domain-specific regulation. That being said, there is the broader meta question of how these systems function in general and some of the unexpected ways they might behave. A lot of the events in the last few years have been in the world of LLMs—a very novel, very interesting development that was quite unexpected, I think, for most people. We are beginning to marry those LLM-based innovations to some of the broader neural network and deep learning work that was going on a decade ago, and I think that's gonna be critical for unlocking really agentified AI systems.

And so I think we do have a lot to learn at a technical level about these systems. So you know, within the world of AI safety, when people talked about controlling AI, they tended to mean one of two things, which actually have vastly different implications.

The first is they meant we want to build AI systems that do what we tell them to do, and that behave in ways that are not negatively surprising.

If they positively surprise us, that's good. If they surprise us with a bad outcome, that's bad. Although those might be hard to disentangle. So there's a whole world of technical research here.

Then there's a world where controlling AI means controlling AI as a technology. And controlling AI as a technology is not a technical problem of AI systems. It is a problem of social, political, and cultural engineering to produce a governance framework that can control the development of this new technology. And some of the people, Nick Bostrom, Dan Hendrycks, et cetera, were quite explicit about this. And that, frankly, is terrifying.

And everywhere that kind of framework has been attempted—this is not the first rodeo, people in AI safety might be shocked to hear—really bad things have happened, ranging, at a minimum, from destroying the underlying technology to broader forms of totalitarian social control.

Timothy B. Lee: Give us an example of that.

Jon Askonas: I'll be accused of catastrophizing, but if you go back and look at the people who were talking about technology out of control in the 19th century, they're talking about the Industrial Revolution and its dramatic social and political implications. And the solution was centralized planning and the command economy. The “social question” of the 19th century is: how do you build political and social control over the economic dynamism and inequality released by new technologies?

That leads to totalitarian communism—did lead, in multiple instances, in multiple flavors. In a slightly less apocalyptic vein, the Nuclear Regulatory Commission and the whole framework around nuclear energy were created to “control” nuclear technology.

There are legitimate problems with nuclear control. There's radiological poisoning, which was quite novel at the time, and radioactive materials. There's concerns about weapons proliferation. But we built a framework that destroyed nuclear energy in the name of saving it. I think we could come up with a few other examples.

Timothy B. Lee: The example that comes to mind for me, which I don't think was as catastrophic but was not really great, was the regulation of cryptography in the 90s. In the 80s, the hardware you needed was expensive enough that basically only the military used it, and so we had this military framework.

And there was a period in the 90s where it was, like, illegal for foreigners to download the latest Netscape browser with good encryption in it, because that was, like, military-grade encryption. And eventually people realized that was silly and they repealed it. But if you have a rule that says every instance of some technology needs to go through some regulatory process, and it's a fast-moving technology, best case you're gonna slow that technology down and have a lot of pointless bureaucracy, and in the worst case you end up killing it altogether.

Jon Askonas: So to answer your question, I think there is a lot of scope for technical AI safety research. I think this is an area where NIST or NSF could play a very positive role. I think rebranding it and kind of rescuing it from some of the more political versions of AI safety is an important task.

AI safety advocates got ahead of their skis

Dean Ball: Yeah. It seems to me that the AI safety world made a fundamental miscalculation about their level of leverage when the AI policy conversations first started to happen in 2022, post-ChatGPT. I remember that Gladstone report you mentioned too, and I remember it calling for banning open source models—models that are orders of magnitude smaller than open source models that we have today—a licensing regime, and in fact making it a felony to publish an open source model.

Timothy B. Lee: Don't forget bombing data centers.

Dean Ball: Well, there was also bombing data centers, right? From Eliezer Yudkowsky, kind of the doyen of all these folks. But in fairness to them, this is not a community that had advocated for anything in the policy world before, and they're, like, engineers, scientists, rationalists that are coming at this stuff. And so I think they're quite willing to recalibrate, and quite willing to say we went way too far on some of that stuff. And also, I think, you know, Twitter has the tendency to promote people's extremes.

Timothy B. Lee: There really was a sense in 2022 that this was their first rodeo. Like, these were all people who had not been in any of these debates before and did not necessarily have good advice about how to contribute constructively.

And so you'd say, okay, some of those theoretical arguments seem plausible, but, like, what's the bill you want to pass, or the regulation, or whatever? They just didn't know what the PDF should look like to even get a conversation started.

Dean Ball: It's what got me into AI policy, in fact. I was doing something completely different two years ago, but what got me into it, personally, was like, wow, it seems like there's a lot of stuff on the table that would involve enormous amounts of government control. And I feel like for the most part, that has gone away, or at least been lessened.

But, there was still this kind of feeling that I had that regulation has sapped our ability to innovate in the physical world in so many different ways. And I had this kind of impressionistic feeling two years ago that the same thing was coming now for the digital world, and that AI was going to be the banner under which that happened.

And frankly, I see that happening still.

Jon Askonas: Right, and I do think that we've cleared at least one very important hurdle on the political threat of AI safety, but I think it was a closer call than people realize.

Dean Ball: Yeah.

Jon Askonas: The Biden administration implemented a very aggressive executive order that created a framework. It did some good things. It would have created a framework for AI adoption at the agency level in the executive branch, which is good. But it also really centralized control of AI policy and created a framework for AI regulation at the model level. It created the language and the bureaucratic players (the actual authorizations of new personnel hires in AI across the executive branch) to centralize dissemination of AI safety, to include anti-discrimination, to include misinformation and disinformation, from the White House across the government and, through NIST, across the private sector.

Recently, I think in a conversation with Bari Weiss, Marc Andreessen described a meeting in May of this year where the Biden administration told him, like, you shouldn't invest in open source, it's pointless, it's going to be closed, it's going to be a small number of companies that are allowed to build these systems, and it's going to be public-private partnership, which is another way of saying centralized governmental control, and that's going to be that.

And that was one of the things that led Marc and Ben to come out and endorse President Trump. Some of the things that were being discussed even a year ago are almost shocking to believe. I mean, if you project out the trajectory of the way AI safety was being implemented and some of the constellations of interest in the Democratic Party—which may still be there—we were looking at things like carbon-emissions-based thresholds for model size and for compute usage.

Looking at things like misinformation, disinformation being baked in at the content layer in social media, which, by the way, they're doing in Australia, they're going to be doing in Europe, they're going to do possibly in the UK.

Dean Ball: In some American state governments, too.

Jon Askonas: In American state governments, right. So, you know, I think the United States has a very unique constitutional framework, and then we've had this massive political change, which is going to really fundamentally alter the politics around AI. But we got very close to them succeeding at building this kind of carapace of social control.

The “AGI Manhattan Project”

Dean Ball: That's right. I'd like to ask you also, since you just mentioned the public-private partnership thing: there is talk around town about an AGI Manhattan Project. It's the US-China commission that put it in as the first recommendation, in fact, in their much-watched annual report.

What do you think about that personally?

Jon Askonas: My colleague Sam Hammond, on the one hand, has been pushing for an AI Manhattan Project. On the other hand, he's also written somewhere else to the effect that the decision to pursue AGI is always and everywhere an essentially religious or philosophical choice.

There's nothing that you want to get an AI system to do that you need “AGI” for. It's a real Pandora's box of what you mean by a Manhattan Project. Do you just mean substantial public investment in AI? That's great. I mean, we seem to have a lot of private investment, but a little more public investment can't hurt. I can imagine some good things you want the federal government to do, like create a large model sandbox or compute environment for academic researchers, others to build models, test models, whatever. You can think of good things.

Do you mean a top secret project where we basically kidnap the world's best AI scientists and lock them away to work on AGI? And classify anything that is anywhere remotely near touching building AGI? That seems like a problem. I don't know if that's going to do what we want it to do.

When people say Manhattan Project today, they mean “really successful bleeding-edge science project.” They don't necessarily think about or mean all of the organizational, institutional implications of the Manhattan Project, some of which we are still living with. We are still living with the fact that we created security classification in the United States basically for the Manhattan Project.

The notion of born secret classified material comes from the Manhattan Project. So, yeah I don't know what people mean when they say, I want an AI Manhattan Project.

Dean Ball: Yeah, I agree completely. It doesn't make a ton of sense to me in particular because it seems like using frontier AI systems is actually going to become harder over time.

What I mean by that is not that it will be more difficult to use, but that actually figuring out what the limits of these things' capabilities are is just going to require the world's foremost experts in every niche topic to ask questions and play around with it. There's just no way you can do that from a centralized—

Jon Askonas: I think really fundamental to my view of AI and AGI is, maybe you'd call it, a kind of materialist or physicalist view. I think the material world contains kinds of complexity and nuance that are many orders of magnitude greater than anything you can encounter on a digital system. And so you often have people extrapolating forward AI capabilities based on what they can do within a purely digital context of the free flow of information, and then extrapolating to what they can do in the physical world as a result.

So if I'm not mistaken, one of the Manifold betting market AI predictions is a definition of AGI that I think is something like: can assemble a certain model of Ferrari. And I think it'll never happen, because you don't assemble a Ferrari. They're hand-built. There are elements of how they're tuned that come down to the touch and feel, to an expert auto mechanic, of what a Ferrari should be. There's not a rule book for it. It's quite different in that regard from building a Ford Transit or whatever. But the reason I bring this up is: think about what kinds of AI are going to be militarily decisive. I don't think we're going to see a leapfrog to, like, AGI. Our problems with automation in the military are way more foundational than that. I mean, there are parts of the military that are running Windows OSes from 20 years ago. The problems we have are so much more basic and fundamental than what we could even implement with AGI.

Dean Ball: Yeah, 100 percent. And people will often say it's a drop-in knowledge worker. And even there, my response is always: if it's 85 percent of a drop-in knowledge worker, that would be a ton of automation. It's also a profoundly different thing from 100 percent of a drop-in knowledge worker.

Like, 99 percent is really different from 100 percent, right?

As Tim has written about quite a lot for self-driving cars, there's a galaxy of difference between 99–

Jon Askonas: But also, tell me you've never worked with the median knowledge worker without telling me you've never worked with the median knowledge worker. The US Army bureaucracy has expanded dramatically over the last 40 years. Do you think it's gotten faster or slower as a result of adding more personnel—more knowledge workers?

Dean Ball: 100 percent. Well, we're wrapping up here. I want to ask you: you write a lot about theology, religious issues, and obviously there is a deep religious component to AI development for a lot of the people who are building it, who believe that they are summoning the machine God. How do you think about that from a kind of religious perspective?

Jon Askonas: Yeah, so it's interesting. Amongst serious Christians there is a certain amount of skepticism or concern about AI. And I think if you really had to boil it down, it's less about the technology itself, as amazing as it is. It's more about the way that the people building it talk about it. When somebody like Blake Lemoine says, when I was working on LaMDA, we saw ourselves as performing a mystical ritual. Or when you hear people like Ray Kurzweil, who was asked “do you believe in God” and answered “not yet.”

There's, like you said, an explicit attempt to kind of summon the machine God, which most people in the AI space, at least those who don't feel that way themselves, just ignore or dismiss out of hand. And I think religious people actually take that seriously in terms of what is being intended by this.

It's not so much the technology as the view of the human person that that reflects. One reason why I think the rationalists have been so concerned about AGI is that their view of what makes human beings who they are is basically their rationality.

So if you're defined by your rationality, and if the more rational you are, the more human you are, then a machine that is much more rational than you is by definition a superhuman machine.

I think there's a concern that if we have a society that already has this mechanistic view of the human mind, and you have a much better mechanism, then that society will no longer have any use for most people. And this is, I think, a huge contrast with most religions’ perspectives on who humans are, and certainly Christianity's perspective on humans as bearers of the God image.

I do think there is a fair amount of skepticism. I think a lot of it is kind of the willies over how AI is talked about. Some of it is also a fear on a spectrum of reasonableness about who or what is trying to speak to us through these AI systems.

I mean, at minimum, I think it's almost incontrovertible that AI presents the biggest temptation to idolatry of any technology humans have ever made.

If idolatry is about casting our own agency and will into an object that we listen to, we used to do that with little idols made out of wood or stone. And now we have something that really does speak to us, as it were. I think that's a huge temptation to idolatry.

Look how much people are invested in things like astrology, which is a quite rudimentary discipline—let's put it that way. The kinds of apparent self-knowledge available from AI systems that have access to an enormous amount of information are enough to be extremely persuasive to your average person.

And so I think the substitution of a kind of cheap trick from an AI system for serious spiritual introspection is something that religious people are legitimately concerned about.

Dean Ball: Makes sense. Well, Jon, thank you very much for your time.

Jon Askonas: Thank you so much, Dean and Tim. Wonderful chatting with you.

Timothy B. Lee: So Dean, I thought that was a really fascinating conversation.

And one of the things I found striking is that I think when people think about AI policy, often, existential risk concerns are the thing that comes up the most. I mean, it has this kind of theatrical quality that a lot of people are very concerned about. And I thought it was interesting that Jon just didn't see that as a top tier issue among Republicans.

He said there were several other things that Republican policymakers were likely to be concerned about. I don't think he made a super clear prediction about where that's going to end up. But it sounds like just other stuff will be on the agenda, which personally is fine with me because I'm not that concerned about X-risk.

But it seems like it might be a shift in emphasis from the Democrats, where I think the folks who were very concerned about X-risk maybe had a little bit more of the ear of the president.

Dean Ball: Yeah. And I think that to some extent, it almost doesn't matter the political party, whichever party was in power, especially in the White House when ChatGPT came out, when AI came to the top of the public conversation, that's when I think the existential risk community had the biggest megaphone, and the largest amount of influence. And I think they probably would have had that influence over a Republican administration just as much as a Democratic one.

But I think now that things have kind of stabilized a little bit, I actually think both parties have started to kind of look askance at the existential risk problems and rate them less seriously than perhaps things like competition with China or near-term concerns like, for conservatives certainly, kids' safety online.

And so I think it will not be surprising, just like Jon said, to see that continue to be a larger focus than the existential risk stuff, which maybe dominates the conversation online. The other thing that I found very interesting is that I think Jon sees a currently unresolved and perhaps in some ways irreconcilable tension between the open source enthusiasts who supported Trump, like Marc Andreessen, and maybe the more security-minded China hawks. Everyone agrees that competition with China is important, but the fundamental question is: is that competition more about security or is it more about economic competition?

If it's security, then the open weight models might be going away soon. If it's about economic competition, then I think the open weight models might well have a big role to play. And I think that that is just a fight that's going on in DC right now that we'll have to see how it plays out.
