25 Comments
[Comment deleted]
Timothy B. Lee:

I agree that Sora does not seem very valuable. But I think generative AI already creates a lot of value. The two big use cases I see right now are coding agents and "deep research." I use ChatGPT for the latter on an almost daily basis and it saves me a ton of time, and programmers are getting enormous value out of tools like Codex and Claude Code.

And I think that's just the beginning. My wife is a doctor, and her hospital is just starting to experiment with transcription tools that listen in on a patient visit and then generate the first version of the doctor's notes. The doctor still reviews and approves the final version, but tools like these should allow doctors to take more thorough notes while paying closer attention to patients. In the long run, these products should be able to save doctors several hours per week. When you multiply that by hundreds of thousands of doctors, that's easily a multibillion-dollar business. And you can tell similar stories for lots of other white-collar professions: lawyers, accountants, medical researchers, scientists, teachers, etc.
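A quick back-of-envelope sketch of that multiplication, with the caveat that every input below is an illustrative assumption rather than a figure from the comment:

```python
# Back-of-envelope value of AI transcription tools for doctors.
# All inputs are illustrative assumptions.
doctors = 500_000          # order-of-magnitude count of practicing US physicians
hours_saved_per_week = 3   # "several hours per week"
weeks_per_year = 48
value_per_hour = 100       # assumed value of a physician-hour, in USD

annual_value = doctors * hours_saved_per_week * weeks_per_year * value_per_hour
print(f"${annual_value / 1e9:.1f}B per year")  # -> $7.2B per year
```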

Parlance:

When do you expect OpenAI to become profitable? My understanding is that it is not, and has no real prospects of being so without a truly massive jump in revenue.

Timothy B. Lee:

I think this is a hard question to answer because it's largely up to OpenAI. The situation isn't that OpenAI can't figure out how to make its models profitable. Rather, they are choosing to forgo short-term profits in favor of faster growth. So to a first approximation, they'll become profitable when they decide to — either because they don't feel there's room for a lot more growth or because the fundraising environment gets more difficult and they can't get the capital they need to fund fast growth.

This isn't an unusual situation. Amazon lost money for years in the late 1990s before turning a profit in the early 2000s. Uber lost money for a decade in the 2010s before turning a profit in the post-COVID years. I think the market for OpenAI's products is going to be massive, so I don't think there's much doubt they'll be able to pull off a similar pivot when the time comes. I don't know when that will happen, but reporting suggests OpenAI's own projections show it happening around 2030 — so that's my best guess on the timing.

disinterested:

But if the open source models get good enough (and they basically are for most people), there will be lots of competitors who can easily undercut OpenAI on price. It’ll be a race to the bottom. These things are basically commodities already.

Timothy B. Lee:

All startups face some risk that they'll invent a valuable product that has no moat, but the risk doesn't strike me as all that high with OpenAI. Among the possible moats:

1. The strength of the ChatGPT brand and the habits of consumers.
2. A large user base gives OpenAI access to unique training data.
3. As model training gets more expensive, it becomes infeasible to produce open versions of frontier models.
4. OpenAI develops a two-sided market for AI ads analogous to Google's AdWords.
5. OpenAI develops exclusive licensing agreements with sources of training data or inference-time data.
6. The engineering work required to create a user-friendly chatbot grows out of reach of startups.

I'm not saying that any of these moats will definitely prove important for OpenAI, but it seems likely to me that some of them (and perhaps others I haven't thought of) will help OpenAI stay at the forefront.

But it might not! Predicting the future is hard, and investing in startups is risky.

Tomas:

I doubt #10. Tesla will be first, IMO.

Charlie Guo:

Thanks for having me, Tim! I agree that the context windows of frontier models will stay around one million tokens, but for slightly different reasons: it's becoming more effective to invest in managing contexts that hit one million tokens. See Claude Code's auto-compaction tools and OpenAI's /compact API endpoint.

Timothy B. Lee:

Yes! And obviously the causation runs in both directions. Poor long-context performance creates demand for context management tools, while the existence of context management tools makes it less urgent to improve long-context performance.
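For readers who haven't used these tools, the core trick behind auto-compaction is simple: when a conversation nears the context limit, older turns get replaced with a summary. A minimal sketch (the `summarize` helper is hypothetical, standing in for another model call; real tools like Claude Code's auto-compact are more sophisticated):

```python
# Minimal sketch of context auto-compaction. When the conversation nears the
# model's context limit, replace the oldest turns with a single summary turn.

MAX_TOKENS = 1_000_000  # assumed context limit
COMPACT_AT = 0.8        # compact when the context is 80% full
KEEP_RECENT = 10        # always keep the most recent turns verbatim

def count_tokens(messages: list[str]) -> int:
    # Crude stand-in for a real tokenizer.
    return sum(len(m.split()) for m in messages)

def maybe_compact(messages: list[str], summarize) -> list[str]:
    if count_tokens(messages) < COMPACT_AT * MAX_TOKENS:
        return messages
    old, recent = messages[:-KEEP_RECENT], messages[-KEEP_RECENT:]
    return [summarize(old)] + recent  # one summary turn replaces the old ones
```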

Florian Brand:

What a year it has been -- and there are no signs of next year slowing down, either. Let's see what the year brings. Happy New Year, and thanks a lot for having me! :)

Leo C:

Huge fan of Nathan Lambert, and I'm beginning to work through his RL book! I didn't know Interconnects had other team members. So happy to see it grow!

Wolff Norbert:

MCP isn't significant as a technology in itself; rather, it serves as a bridge to technical standards and services while remaining independent of any particular language model, i.e., as a communication standard. In Europe at least, this serves to protect certain industries and their access to the customer interface. Regulation protects consumers from monopolies and trusts, and standards are the main weapon.
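To make the "communication standard" point concrete, here is roughly what a minimal MCP tool server looks like in the reference Python SDK (a sketch; the server name and tool are made up, and the key property is that the server knows nothing about which model will call it):

```python
# Minimal MCP server sketch using the reference Python SDK (the "mcp" package).
# The server only declares tools; any MCP-capable client, regardless of which
# language model sits behind it, can discover and call them.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves MCP over stdio by default
```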

Brennan McDonald:

Interesting predictions from everyone; nice article! I definitely think MCP will drop out of popularity in 2026, and I'm sceptical that the legal/political consequences of the AI boom will be material for the big players. Even $1.5 billion for Anthropic is trivial relative to the economic opportunity they're pursuing (see: big banking and big tech regulatory settlements and fines).

Jouni Heikniemi:

Do you see a discrepancy in stating that Corporate America is super-hot on AI services (#1), and then acknowledging that OpenAI and Anthropic are aiming for a measly $45 billion in revenue, combined (#2)?

There are obviously other players (Google, Microsoft, Meta, etc.) on the field as well, but to me, the revenue predictions for 2026 don't describe a hockey-stick curve on a sizzling market yet.

Timothy B. Lee:

I don’t know, doubling revenue from $15 to $30 billion seems like pretty rapid growth to me! It’s hard to grow that fast when you are already at that scale.

Jouni Heikniemi:

Agreed, but I also think that's beside the point. $45 billion is far too little revenue to be coming from a field that's called this hot.

I don't know if you agree with Zitron et al.'s assessment that cumulative revenue from AI services for 2026-2030 should be $2 trillion for the capex investment rate to make sense. But even if you believe that OpenAI and Anthropic are only half of the pie (i.e., $90 billion of AI revenue for 2026), and you accept that 50% CAGR as the baseline for the next five years (which is, I agree, a staggering figure to maintain), you still wouldn't make it to a cumulative $2T.

I don't disagree with your prediction; I believe 1.5xing the 2025 revenues is doable. It's just that I don't see those low-ish numbers as good evidence of the market's searing heat right now.
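For what it's worth, the arithmetic behind that claim checks out; a quick sketch using the figures from the comment above:

```python
# Cumulative 2026-2030 AI revenue under the comment's assumptions:
# $90B combined in 2026, growing at a 50% CAGR.
revenue, total = 90.0, 0.0  # billions of USD
for year in range(2026, 2031):
    total += revenue
    revenue *= 1.5
print(f"${total:.0f}B cumulative")  # -> $1187B, well short of $2,000B
```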

Zoot:

Why does this not surprise me? Great stuff.

Tim Kilpatrick:

Well done. Thanks, Tim, for pulling this list together.

David Watson:

Looks like Isaac King went ahead and created Manifold prediction markets for all of these, which he says he'll resolve based on whatever is published here a year from now.

Dunno if anyone else is into play-money prediction markets, but a few of these show significant disagreement with the authors of the predictions here.

A big disagreement that surprised me was the 20-hour coding task one:

https://manifold.markets/IsaacKing/ai-models-will-be-able-to-complete?r=RGF2aWRGV2F0c29u

I'm not sure how to decide whether the anti-AI super PAC is likely, but Manifold sure thinks it isn't:

https://manifold.markets/IsaacKing/there-will-be-an-antiai-super-pac-t?r=RGF2aWRGV2F0c29u

I was a bit less surprised that the Manifold folks (including me) find the compound predictions a bit less likely than the authors do:

https://manifold.markets/IsaacKing/the-first-fully-autonomous-vehicle?r=RGF2aWRGV2F0c29u

https://manifold.markets/IsaacKing/news-coverage-linking-ai-to-suicide?r=RGF2aWRGV2F0c29u
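One reason compound predictions tend to score lower on the markets, spelled out: a compound prediction is a conjunction, so under rough independence its probability is the product of its parts. The numbers below are illustrative, not from the post:

```python
# Two roughly independent events that must BOTH happen: the conjunction is
# less likely than either component. Probabilities here are hypothetical.
p_first, p_second = 0.8, 0.7
print(p_first * p_second)  # 0.56 -- lower than either 0.8 or 0.7
```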

Kai Williams:

When I checked just now, the 20-hour coding task one looks to be at around 50% in a couple of different markets. Though there's a good chance it won't be adjudicable: Manifold currently gives a 36% chance that METR's 50% time horizon will be ruled 'ambiguous': https://manifold.markets/Bayesian/will-the-metr-50-time-horizon-be-am

There's not a lot of liquidity in the anti-AI super PAC one, but I wonder if there's also uncertainty around the resolution criteria.

Michel Justen:

> there are at least two well-funded pro-AI super PACs. [...] Meanwhile, there’s no equally organized counterweight on the anti-AI side.

What about Public First? I'm curious whether they meet your criteria, @charlieguo.

They're not exactly "anti-AI", more pro-regulation. They describe themselves as "a coalition that puts the public interest first and ensures that AI innovations benefit all Americans while guarding against risks to our children, jobs, national security, and humanity." Probably more politically palatable than "anti-AI" in some circles. (https://publicfirstaction.us/news/chris-stewart-brad-carson-announce-new-organization-and-bipartisan-super-pacs-to-support-ai-safeguards)

If they meet your criteria, I agree with your 70% prediction that such a super PAC will raise >$30 million. (Especially if Anthropic IPOs and employees can donate.) It'll be a wild year watching AI super PACs battle it out.

Charlie Guo:

Yes, I'd consider Public First as meeting the criteria. "Anti-AI" was probably too glib a descriptor; in practice, I don't think anyone is going to try to put the AI genie back in the bottle, but rather to push policy restricting content creation and data center construction (among other things).

Sounds like you should put some money on Manifold, then; they disagree with the 70% probability.

Chris Luehmann:

AI is getting closer to having the ability to bootstrap the soul. For real. Please break my argument: https://open.substack.com/pub/crluehmann/p/the-mixtape-of-science?utm_campaign=post-expanded-share&utm_medium=web

Sydney Olson:

I could definitely see prediction 13 coming to fruition. The lag in policy creation around these technologies is bound to catch up at some point. These AI systems need checks and balances. https://open.substack.com/pub/sydneygolson/p/engineering-perspective-is-ai-going-to-destroy-the-world?r=3gh4a4&utm_medium=ios