Based on Lenny's Podcast data
Lenny's Knowledge Sketch

What AI Practitioners
Actually Believe

Aishwarya Naresh Reganti & Kiriti Badam
AI researchers & practitioners
JAN 11 2026
The Survey

The Gaps Between
AI Hype and Practice

Belief spectrum: Believe it · Using it · Skeptical · Ignoring it
"If you make practitioners sit together and ask them what's real — the answers are very different from the conference keynotes."
  • Most AI practitioners are more optimistic than public discourse
  • The biggest gap: evals — everyone knows they need them, few do them well
  • Deployment ≠ adoption: shipping AI features ≠ users actually changing behavior
  • Safety concerns are real but nuanced — practitioners vs. researchers diverge
Framework

Practitioner Reality Check

  • Using AI daily: 82%
  • Confident in evals: 28%
  • Have prod AI in apps: 61%
  • Satisfied with outputs: 44%
  • Measuring AI ROI: 35%
  • The eval gap: critical skill, widely acknowledged, poorly executed
  • Deployment gap: 60% have shipped AI; <40% have users relying on it daily
  • Trust gap: practitioners trust AI output less than leadership does
  • Measurement gap: most AI ROI is qualitative, not quantitative
The honest finding: AI is genuinely useful. The hype is about capabilities; the gap is in safe, reliable deployment at scale.
What Practitioners Know

The Real State of AI in 2026

  • Works well: Generation, summarization, classification at scale
  • Works with care: Reasoning chains, multi-step workflows, coding assistance
  • Hard: Consistent factual accuracy, complex reasoning, novel domains
  • Underappreciated: Prompt brittleness — small changes cause large output swings
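
Prompt brittleness can be made measurable rather than anecdotal. Below is a hypothetical sketch of one way to do it: run semantically equivalent paraphrases through the model and compute how often they disagree with the majority label. The `classify` function here is an invented stand-in for a real model call, deliberately wording-sensitive to illustrate the effect.

```python
# Hypothetical sketch: quantifying prompt brittleness by checking whether
# semantically equivalent prompts yield the same label.
from collections import Counter

def classify(prompt: str) -> str:
    """Stand-in for a real LLM call; deliberately sensitive to wording."""
    return "refund" if "refund" in prompt.lower() else "other"

# Three paraphrases of the same request (fabricated examples)
PARAPHRASES = [
    "Customer wants their money back for order #123.",
    "Please process a refund for order #123.",
    "Order #123: buyer requests reimbursement.",
]

def brittleness(prompts: list[str]) -> float:
    """Fraction of paraphrases that disagree with the majority label."""
    labels = [classify(p) for p in prompts]
    majority_count = Counter(labels).most_common(1)[0][1]
    return 1 - majority_count / len(labels)

print(f"disagreement rate: {brittleness(PARAPHRASES):.0%}")
```

A disagreement rate above zero on a paraphrase set is a cheap early-warning signal that a prompt needs hardening before it ships.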
The evals blindspot

Most teams ship AI features with vibes-based QA. That's fine for v0.1; it's not fine at scale.
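
The step up from vibes-based QA doesn't need to be elaborate. A minimal sketch of an eval harness, assuming a placeholder `run_model` function and an invented golden set, is just input/expected pairs plus an explicit pass-rate gate:

```python
# Minimal eval-harness sketch: a golden set and a pass-rate gate instead
# of vibes-based QA. `run_model` is a placeholder for the AI feature
# under test; the cases and threshold are illustrative.
GOLDEN_SET = [
    {"input": "2 + 2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
]

def run_model(text: str) -> str:
    """Placeholder for a real model call."""
    return {"2 + 2": "4", "capital of France": "Paris"}[text]

def pass_rate(cases: list[dict]) -> float:
    """Fraction of golden-set cases the model gets exactly right."""
    hits = sum(run_model(c["input"]) == c["expected"] for c in cases)
    return hits / len(cases)

THRESHOLD = 0.9  # "good enough", defined explicitly up front
rate = pass_rate(GOLDEN_SET)
print(f"pass rate {rate:.0%}: {'ship' if rate >= THRESHOLD else 'block'}")
```

The point is not the scoring logic (exact match is the crudest possible grader) but that the quality bar is written down and checked on every change.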

The adoption reality

AI features ship. AI behavior change doesn't. User re-education is the hardest part of AI PM.
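
One way to see the gap between shipping and adoption in your own data is to separate "tried the feature" from "kept using it". A hypothetical sketch, with fabricated event logs and an invented retention rule (active in at least 3 of the last 4 weeks):

```python
# Hypothetical sketch: feature usage vs. behavior change.
# Event logs are fabricated; "behavior changed" here means the user was
# active with the AI feature in at least 3 of the last 4 weeks.
from collections import defaultdict

# (user_id, week) events for an AI feature
EVENTS = [
    ("alice", 1), ("alice", 2), ("alice", 3), ("alice", 4),
    ("bob", 1),  # tried it once, never returned
    ("carol", 2), ("carol", 3), ("carol", 4),
]

weeks_active: dict[str, set[int]] = defaultdict(set)
for user, week in EVENTS:
    weeks_active[user].add(week)

tried = len(weeks_active)  # touched the feature at all
retained = sum(len(weeks) >= 3 for weeks in weeks_active.values())

print(f"tried: {tried}, behavior changed: {retained} ({retained / tried:.0%})")
```

Reporting only `tried` is the "deployment" number; `retained` is the "adoption" number, and the two routinely diverge.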

Playbook

Close the Practitioner Gap

  • Build an eval framework before your 2nd AI feature, not your 5th
  • Define "good enough" explicitly — vague quality targets produce vague quality
  • Measure user behavior change, not just feature usage
  • Create a practitioner community inside your org — share what works and what doesn't
The community finding: The best AI practitioners share notes obsessively. Isolation is the enemy of good AI practice.
Contrarian

AI Research vs Practice Myths

  • Research papers = production reality → Instead: research papers are existence proofs. Production reality is 10× harder.
  • Bigger model = better product → Instead: better evals = better product. Model size is one lever; evaluation is all the levers.
  • AI is deterministic → Instead: AI is probabilistic. Design your product around this, not despite it.
  • Safety is a research problem → Instead: safety is a deployment problem. Practitioners own it as much as researchers.