
AI Predictions from Top Podcasts: A 2026 Reality Check

February 24, 2026
Research

Between February 2024 and January 2025, the guests on top AI podcasts made hundreds of claims about the future. Most were vapor — vague directions, hedged hypotheticals, things no one could ever check. When we ran 350 hours of AI podcast audio through a prediction-extraction pipeline, only 15% of segments that sounded predictive contained an actual, falsifiable claim. The other 85% was rhetorical filler dressed up as forecasting.

We found 18 real predictions about AI. Of the five we can score so far (out of seven that are due or nearly due), none was wrong. The honest version of that headline is less impressive than it sounds, though: the scorable predictions were mostly safe bets, and the bold ones haven't come due yet.

110 Episodes, 11 Podcasts, 18 Real Predictions

We transcribed and speaker-diarized 110 full episodes from 11 of the most prominent AI-focused podcasts: Dwarkesh Podcast, Latent Space, Cognitive Revolution, No Priors, Lex Fridman, All-In, 80,000 Hours, Exponential View, Eye on A.I., Machine Learning Street Talk, and TWIML AI. That's roughly 350 hours of audio covering the year after ChatGPT broke into the mainstream — when AI hype was at its most feverish.

We ran the transcripts through a keyword extraction pipeline, flagging segments that contained a timeframe keyword ("by 2025," "in five years," "within the next decade") combined with predictive language. That produced 140 candidate segments. After filtering out hypotheticals, questions, retrospective observations, quotes attributed to others, and vague directional statements, 21 genuine falsifiable predictions survived. We set aside 3 that weren't about AI (more on those below), leaving 18 predictions focused on the technology itself.

The Scorecard

Seven of the 18 predictions target years that have already arrived or are imminent (2024–2027). The remaining 11 target 2028 and beyond. We've added a difficulty column — because being right about a safe bet isn't the same as being right about a bold one.

Already Checkable (2024–2026)

| Prediction | Who | Year | Difficulty | Verdict |
|---|---|---|---|---|
| Enterprises use AI in many more use cases | Itamar Friedman | 2024 | Safe | Correct |
| AI moves from individual to team/org tool | Ethan Mollick | 2025 | Medium | Correct |
| AI models are "amazing AI researchers" | Leopold Aschenbrenner | 2026 | Bold | Too early |

Nearly Due (2027)

| Prediction | Who | Year | Difficulty | Verdict |
|---|---|---|---|---|
| ~$1 trillion in cumulative AI investment | Leopold Aschenbrenner | 2027 | Safe | On track |
| AGI capability "level 5.5" | Leopold Aschenbrenner | 2027 | Bold | Too early |
| 10% chance of HLMI (AI researcher survey) | Katja Grace | 2027 | — | Correct so far |
| AI co-piloting phase for everything | Ylli Bajraktari | 2027 | Safe | On track |

Of the five we can score today: three correct, two on track, none wrong. Two bold predictions from Aschenbrenner haven't reached their deadlines — the "amazing AI researchers" call has 10 months left, the "level 5.5" call has nearly two years. The Grace survey is probabilistic: it said HLMI by 2027 was a 10% shot, and so far that looks right.

Five for five sounds impressive until you check the difficulty column. The "correct" and "on track" verdicts cluster around safe and medium-difficulty calls — describing momentum already visible when the predictions were made. The only genuinely bold near-term predictions are the two we can't score yet.

The Predictions, In Their Own Words

What They Got Right

Enterprise AI adoption. Itamar Friedman is the CEO of Codium AI (now Qodo), a Y Combinator-backed coding assistant company. On the Cognitive Revolution in March 2024, he made the safest prediction in our dataset:

"Don't be so surprised if, by the end of 2024, you will use AI in many [...] other cases." — Itamar Friedman, Cognitive Revolution, March 12, 2024 [18:03]

(Quote lightly edited for clarity from automated transcription.)

Verdict: Correct. Difficulty: Safe. By late 2024, AI had spread from coding assistants into customer service, legal review, marketing copy, document processing, and dozens of other enterprise workflows. The wave was already visible when he said it. This is less a prediction than a description of momentum — but it meets our criteria (specific claim, named timeframe, falsifiable outcome), so it stays.


AI as organizational tool. Ethan Mollick is a Wharton professor and author of Co-Intelligence — the guy who makes his MBA students use ChatGPT for every assignment. On Exponential View in December 2024:

"I think it's going to move from the individual productivity tool to the team group organizational one in 2025." — Ethan Mollick, Exponential View, December 11, 2024 [7:58]

Verdict: Correct. Difficulty: Medium. Microsoft Copilot for 365, Salesforce Agentforce, and Google Workspace AI all shifted their pitch from individual users to organizational deployment in 2025. Mollick called the inflection point almost exactly. This is the strongest result in the dataset — a specific timing call that wasn't obvious when he made it.


The co-piloting phase. Ylli Bajraktari is a former national security advisor and CEO of the Special Competitive Studies Project. On Eye on A.I. in April 2024:

"I think in the next three years, we're going to live in this co-piloting phase where everything we do is with these models." — Ylli Bajraktari, Eye on A.I., April 3, 2024 [22:21]

Verdict: On track. Difficulty: Safe. We're two years in and this description fits. AI copilots are embedded in coding, writing, search, design, and business workflows. "Everything we do" is an overstatement, but the framing of a co-pilot era has proven useful. Difficulty is low because copilots were already proliferating when he said it — calling the trend a "phase" added framing, not forecasting.


AI investment trajectory. Leopold Aschenbrenner was a 22-year-old ex-OpenAI researcher when he sat down with Dwarkesh Patel in June 2024 for a now-famous three-hour interview laying out his "Situational Awareness" thesis. On the money:

"A trillion dollars of [...] total AI investment by 2027 [...] we're very much on track on it." — Leopold Aschenbrenner, Dwarkesh Podcast, June 4, 2024 [5:08]

Verdict: On track. Difficulty: Safe. Microsoft, Google, Amazon, and Meta collectively announced well over $200 billion in AI capital expenditure for 2025 alone. Cumulative investment hitting $1 trillion by 2027 looks like a near-certainty. Aschenbrenner himself frames this as trend extrapolation ("we're very much on track"), not a contrarian call. He's describing the trajectory as it existed — more narration than forecast.


10% chance of HLMI by 2027. Katja Grace founded AI Impacts and runs the largest recurring survey of AI researchers. On the Cognitive Revolution in March 2024, she summarized results from surveying 2,700 researchers:

"The survey gives a 10% chance of high level machine intelligence by 2027." — Katja Grace, Cognitive Revolution, March 21, 2024 [48:24]

Verdict: Correct so far. The researchers said HLMI by 2027 was unlikely — a 10% shot. As of February 2026, we don't have HLMI. The 90% majority is looking well-calibrated. We don't assign a difficulty rating to probabilistic survey results. The value is in calibration, not boldness. If HLMI still hasn't arrived by 2028, this will stand as strong evidence of good collective forecasting.

Where We're Calling It

"Amazing AI researchers" by 2026. Aschenbrenner, in that same Dwarkesh interview:

"But some of them by 2026 will be amazing AI researchers. Why aren't they making that bet?" — Leopold Aschenbrenner, Dwarkesh Podcast, June 4, 2024 [171:54]

Verdict: Too early. Difficulty: Bold. It's February 2026. AI models are powerful research assistants — capable of literature review, code generation, data analysis, and hypothesis refinement. But "amazing AI researcher" implies autonomous generation of novel insights, original experimental design, the kind of work that advances a field. We're not there. The bar for "amazing AI researcher" should be higher than "really useful research assistant." He has ten months left.


AGI capability "level 5.5" by 2027. Also Aschenbrenner:

"And so I guess that's like 5.5 level by 2027, whatever that's called." — Leopold Aschenbrenner, Dwarkesh Podcast, June 4, 2024 [20:50]

Verdict: Too early. Difficulty: Bold. In his "Situational Awareness" document, Aschenbrenner outlines a capability progression from chatbot-level systems at the low end through expert reasoning and autonomous coding, up to systems that can independently conduct AI research at the high end. "Level 5.5" sits near the top — roughly, AI that can autonomously design experiments, identify new scaling laws, and architect novel training paradigms without human direction. As of early 2026, the best models are powerful assistants in these tasks but don't drive the research agenda themselves. The gap between "impressive copilot" and "autonomous research contributor" remains significant. He has nearly two years, and progress is fast. But this is the boldest checkable prediction in our dataset.

What Nobody Predicted

The near-term predictions we can check all pointed in the right direction. But the most important AI developments of 2024–2025 were largely absent from our dataset:

The reasoning model paradigm. OpenAI's o1 (September 2024) and subsequent reasoning models introduced inference-time compute scaling as a major capability axis — spending more time "thinking" rather than just training larger models. Nobody in our 110 episodes predicted this architectural shift, even though it changed how the field thinks about scaling.

DeepSeek and open-weight disruption. A Chinese lab releasing competitive frontier models at a fraction of the training cost upended assumptions about the relationship between investment and capability. The trillion-dollar investment trajectory may be on track and simultaneously less important than anyone expected.

The enterprise revenue gap. Despite rapid adoption, most enterprises struggled to show clear ROI from AI deployments through 2025. The "AI is everywhere" predictions were correct about adoption but silent on whether adoption was translating into business value.

These aren't failures of individual forecasters — most were asked about specific topics and responded accordingly. But they reveal a structural limitation of prediction-tracking: the most consequential developments are often the ones nobody thinks to ask about.

What's Still Coming (2028–2044)

Eleven predictions target dates too far out to check. The boldest:

10-gigawatt data centers by 2028. Aschenbrenner predicted data centers of unprecedented scale:

"We're going to be in 2028 building the 10 gigawatt data centers." — Leopold Aschenbrenner, Dwarkesh Podcast, June 4, 2024 [52:36]

The largest campuses in development as of 2026 are in the 1–5 GW range. A 10 GW facility would draw roughly as much power as ten nuclear plants. Difficulty: Bold. This requires a step change in power infrastructure and permitting, not just continued investment.

Photographs inadmissible as court evidence by ~2029. On Machine Learning Street Talk in May 2024, during a discussion about deepfakes and trust in media:

"Up until five years ago, a photograph would have been admissible as evidence in court. And in five years' time, it won't be. And five years ago, an image of Putin saying something or a video, we would have believed it. In five years' time, we won't believe it." — Machine Learning Street Talk, May 6, 2024 [70:42]

Deepfake concerns have intensified, but courts are moving toward stricter authentication requirements, not blanket inadmissibility. This prediction confuses the direction of adaptation — courts are adding guardrails, not abandoning visual evidence. Difficulty: Bold. Early read: likely wrong.

More software engineers, not fewer. Francois Chollet (creator of Keras, Google researcher, and one of AI's most prominent skeptics) on the Dwarkesh Podcast in June 2024:

"In five years, there will be more software engineers than there are today." — Francois Chollet, Dwarkesh Podcast, June 11, 2024 [33:25]

Through 2025, AI coding tools boosted productivity without causing net job losses. The Jevons paradox (efficiency gains increasing total demand) appears to be holding. Difficulty: Medium. Contrarian to the dominant "AI replaces coders" narrative, but supported by historical patterns.

AI infrastructure delayed by 5 years. Azeem Azhar (author of The Exponential Age, veteran technology analyst) on Exponential View in November 2024:

"The things that they think they're going to be able to deliver by 2030 are probably 2035." — Azeem Azhar, Exponential View, November 28, 2024 [56:09]

Given the history of large-scale infrastructure projects and growing bottlenecks in power, permitting, and chip manufacturing, this may be the most contrarian and most realistic prediction in our dataset. Difficulty: Bold.

Automation pace stays gradual. Robin Hanson (economist at George Mason University, author of The Age of Em) made the longest-range prediction in our set on the Cognitive Revolution in February 2024, arguing the next 20 years of automation will look much like the last 20: steady, not transformative. Difficulty: Bold. Directly contradicts the accelerationist consensus.

Other future predictions include: AI decision-making remaining opaque to users through 2028 (Katja Grace's AI researcher survey, Safe), US power production growing significantly by 2029 (Aschenbrenner, Medium), smartphones as full AI assistants by 2029 (Terry Sejnowski, Salk Institute pioneer and co-inventor of Boltzmann machines, Safe), new solar capacity matching total global energy capacity by 2029 (Exponential View, Bold), 35 of 39 surveyed AI tasks achievable within 10 years (Grace survey, Medium), and a shift to a different compute platform by 2034 (All-In hosts, Bold).

Off the Record: The Non-AI Predictions

Our pipeline caught 3 predictions that met all our criteria but weren't about AI. We cut them from the main analysis to keep the focus on technology — but we're not hiding them. All 3 came from All-In hosts:

| Prediction | Year | Current Status |
|---|---|---|
| US surpasses $1 trillion in annual debt interest payments | 2026 | On track |
| Social Security becomes de facto bankrupt | 2033 | TBD |
| Cost of generating energy becomes "essentially free" | 2034 | Unlikely as stated |

The debt interest prediction is tracking — US net interest payments exceeded $880 billion in fiscal 2025. The energy prediction confuses marginal generation costs (approaching zero for solar) with delivered energy costs (which include transmission, storage, and grid integration). Social Security is too far out to evaluate. None of the 3 were excluded because they were inconvenient. They just weren't about AI.

What We Learned

The 15% rule. Only 15% of segments that sound like predictions actually are predictions. The rest are hypotheticals, questions, attributions, or rhetorical devices. Podcast conversations are built on speculative scaffolding that mimics predictive language without committing to anything. "What if we get AGI in two years?" sounds bold but predicts nothing.

Leopold Aschenbrenner dominates. Five of the 18 predictions (28%) came from a single guest in a single episode — Aschenbrenner on the Dwarkesh Podcast in June 2024. His "Situational Awareness" thesis produced more concrete, falsifiable claims in one three-hour conversation than most podcasts generated across ten episodes. Agree or disagree with his timeline, he put specific stakes in the ground. That alone sets him apart.

The safe predictions were right. The bold ones haven't come due. Every near-term prediction we scored "correct" was also rated safe or medium difficulty. The two bold predictions, both from Aschenbrenner, are the ones still pending. This pattern means we can't yet distinguish genuine forecasting skill from momentum-reading. The 2027 cluster will be the real test.

Nobody predicted a miss. Every checkable prediction skews optimistic and correct. There are no predictions of AI disappointment, capability stagnation, or a funding pullback anywhere in our data. This almost certainly reflects survivorship bias: guests are selected for bold views, not caution. Pessimists who thought 2024–2026 would bring an AI correction either weren't invited on these shows or didn't phrase their skepticism as falsifiable predictions. Our dataset captures what was said on 11 popular podcasts — not the full range of informed opinion about AI.

18 is a small number. We want to be upfront: 18 predictions from 11 podcasts is a useful snapshot, not a statistical sample. We can observe patterns (a single guest dominates, bearish predictions are absent, everything clusters around 2027) but we can't generalize about AI forecasting accuracy from this dataset alone. The value is in the specific claims and whether they held up, not in any aggregate score.

The real test is 2027. The heaviest cluster of predictions targets 2027 — when Aschenbrenner's AGI timeline, the AI researcher survey's HLMI window, and the trillion-dollar investment threshold all converge. If 2027 arrives without a dramatic capability jump, several of the most prominent voices in AI forecasting will have been early. Not wrong, exactly. But early, which in predictions amounts to the same thing.

How We Did This

Audio collection. 110 episodes downloaded from RSS feeds of 11 AI-focused podcasts, covering February 2024 through January 2025.

Transcription and diarization. All episodes were transcribed using WhisperX (small model) with speaker diarization via pyannote 3.1, running on a local NVIDIA RTX 4090. Total processing time: approximately 8 hours for 350+ hours of audio.

Candidate extraction. We scanned 113,425 transcript segments for timeframe keywords ("by 2025," "in five years," "next decade," etc.) combined with predictive verb patterns. This produced 140 candidate segments.
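The scan above can be sketched with a pair of regular expressions. The actual keyword lists aren't published in this post, so the patterns below (`TIMEFRAME`, `PREDICTIVE`, `is_candidate`) are illustrative stand-ins, not the pipeline's real vocabulary:

```python
import re

# Illustrative stand-ins for the pipeline's keyword lists (the real
# lists are larger): a timeframe cue paired with predictive phrasing.
TIMEFRAME = re.compile(
    r"\bby (?:the end of )?20\d\d\b"
    r"|\bin (?:the next )?(?:two|three|five|ten) years\b"
    r"|\b(?:within the )?next decade\b",
    re.IGNORECASE,
)
PREDICTIVE = re.compile(
    r"\b(?:will|won't|going to|gonna|there will be)\b",
    re.IGNORECASE,
)

def is_candidate(segment: str) -> bool:
    """A transcript segment is a candidate prediction only if it pairs
    a timeframe keyword with predictive language."""
    return bool(TIMEFRAME.search(segment)) and bool(PREDICTIVE.search(segment))
```

Note that a hypothetical like "What if we get AGI in two years?" carries a timeframe but no predictive verb, so it never becomes a candidate; hypotheticals that do slip through are rejected in the judgment pass below.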

Judgment. Each candidate was evaluated in a single pass by one reviewer against three criteria: (1) specific claim, not just a direction, (2) named timeframe, and (3) falsifiable outcome. Hypotheticals, questions, retrospective statements, and attributions to others were rejected. 21 of 140 candidates passed (15%); 3 non-AI predictions were separated into their own section. No inter-rater reliability check was performed. A second reviewer scoring the same 140 candidates could plausibly accept anywhere from 15 to 30, depending on how strictly they draw the line between "specific claim" and "vague direction." Our 15% rate is one data point, not ground truth.

Podcast selection. Our 11 podcasts were chosen for prominence and AI focus, not randomly sampled. This means our dataset reflects what the most-listened-to AI commentators said, which skews toward optimism and insider perspectives. Bearish predictions are underrepresented — whether because pessimists don't get booked on these shows, or because they frame their skepticism in ways that don't trigger timeframe keywords.

Limitations. Keyword-based extraction catches only predictions with explicit timeframe language. A guest who says "GPT-5 will be a massive leap" without naming a date won't appear in our dataset. Whisper transcription is imperfect, particularly with proper nouns and technical terminology — some quotes may contain minor transcription artifacts. Speaker diarization occasionally misattributes quotes in multi-speaker segments. All timestamps are approximate, accurate to within ±30 seconds.
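The timestamps in this post count total minutes rather than wrapping into hours, so [171:54] means 171 minutes and 54 seconds into the episode. A minimal helper (the function name is ours, not the pipeline's) for rendering a transcript segment's start time in that format:

```python
def to_stamp(seconds: float) -> str:
    """Render a segment start time (in seconds) as the M:SS stamps
    used in this post; minutes run past 60 instead of rolling into hours."""
    total = int(round(seconds))
    return f"{total // 60}:{total % 60:02d}"
```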

Source Episodes

Every prediction in this post can be verified against the original audio:

| Podcast | Episode | Date | Timestamps |
|---|---|---|---|
| Cognitive Revolution | Flow Engineering and Code Integrity at Scale with Itamar Friedman | Mar 12, 2024 | 18:03 |
| Exponential View | What AI Holds for Us in 2025, with Ethan Mollick | Dec 11, 2024 | 7:58 |
| Eye on A.I. | #179 Ylli Bajraktari: AI and National Security | Apr 3, 2024 | 22:21 |
| Dwarkesh Podcast | Leopold Aschenbrenner — 2027 AGI, China/US Super-Intelligence Race | Jun 4, 2024 | 5:08, 20:50, 52:36, 171:54, 252:46 |
| Cognitive Revolution | Surveying 2,700+ AI Researchers with Katja Grace | Mar 21, 2024 | 45:00, 48:24, 63:46 |
| Dwarkesh Podcast | Francois Chollet — Why the Biggest AI Models Can't Solve Simple Puzzles | Jun 11, 2024 | 33:25 |
| ML Street Talk | CAN MACHINES REPLACE US? — Maria Santacaterina | May 6, 2024 | 70:42 |
| Exponential View | Why AI, Solar & Batteries Will Keep Getting Cheaper | Nov 28, 2024 | 56:09 |
| Cognitive Revolution | Is AGI Far? With Robin Hanson | Feb 27, 2024 | 59:40 |
| Exponential View | A Case for Optimism with Chris Anderson | Mar 8, 2024 | 66:25 |
| Eye on A.I. | #178 Terry Sejnowski on Human Development Principles in AI | Mar 27, 2024 | 25:47 |
| All-In | E171: DOJ sues Apple, AI arms race, Reddit IPO | Mar 22, 2024 | 10:58 |
| All-In | E172: SBF gets 25 years | Mar 29, 2024 | 48:46 |
| All-In | Meta's scorched earth approach to AI | Apr 26, 2024 | 33:26, 91:18 |

We'll Be Back

We plan to re-score every prediction in this dataset in January 2028, when the 2027 cluster comes due. By then, we'll know whether Aschenbrenner's trillion-dollar threshold materialized, whether the AI researcher survey's 10% HLMI window opened, and whether we're still in Bajraktari's co-piloting phase or have moved beyond it.

The predictions, transcripts, and methodology are all reproducible. If we missed your favorite AI prediction, it probably failed the specificity test.
