Based on Lenny's Podcast data
Lenny's Knowledge Sketch

Community Notes: The Algorithm That Stops Misinformation

Keith Coleman & Jay Baxter
Product Lead & ML Engineer, Community Notes (X)
FEB 2025
The Idea

Community Notes: Crowdsourced Context

USER SEES POST → WRITES NOTE → PEOPLE RATE → SHOWN IF AGREEMENT
"We look for agreement from people who have disagreed in the past. When people who are very polarized actually agree, that's what makes the notes so neutral and accurate."
  • Someone sees a misleading post on X
  • They propose a note with additional context
  • Other users rate that note
  • If people with opposing views agree it's helpful, the note shows to everyone
  • 950,000 contributors worldwide, nearing 1 million
The Algorithm

Bridging-Based Agreement: The Secret Sauce

  • Not majority rules: A simple voting system would just amplify the loudest group
  • Not PageRank: Reputation-graph approaches handle coordinated manipulation rings, but polarization was the real problem
  • Bridging agreement: Find notes that people across the political divide both find helpful
  • Matrix factorization: ML algorithm trained with gradient descent to predict ratings
  • Threshold = 0.4: A note's helpfulness score must clear 0.4, which in practice requires agreement from both sides of a polarized divide
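The bullets above can be sketched end to end. This toy follows the structure described in the episode (rating ≈ global mean + user intercept + note intercept + user factor · note factor, with the 0.4 bar applied to the note intercept), but the data, loss details, and hyperparameters are illustrative assumptions, not X's production scorer:

```python
import numpy as np

# Toy sketch of bridging-based matrix factorization; assumptions noted above.
rng = np.random.default_rng(0)

# Rows = raters, cols = notes; 1 = "helpful", 0 = "not helpful".
# Raters 0-2 and 3-5 form two opposed camps.
R = np.array([
    [1, 0, 1, 0],   # note 0: everyone rates it helpful; note 1: no one does
    [1, 0, 1, 0],   # note 2: only camp A rates it helpful
    [1, 0, 1, 0],   # note 3: only camp B rates it helpful
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
], dtype=float)

n_users, n_notes = R.shape
mu = 0.0
iu = np.zeros(n_users)              # user intercepts (how generous a rater is)
inote = np.zeros(n_notes)           # note intercepts = "helpfulness" scores
fu = rng.normal(0, 0.1, n_users)    # user factors (capture polarity)
fn = rng.normal(0, 0.1, n_notes)    # note factors

lr, lam_f, lam_i = 0.05, 0.03, 0.05
for _ in range(3000):               # plain gradient descent on squared error
    err = mu + iu[:, None] + inote[None, :] + np.outer(fu, fn) - R
    mu -= lr * err.mean()
    iu -= lr * (err.mean(axis=1) + lam_i * iu)
    inote -= lr * (err.mean(axis=0) + lam_i * inote)
    fu -= lr * ((err * fn[None, :]).mean(axis=1) + lam_f * fu)
    fn -= lr * ((err * fu[:, None]).mean(axis=0) + lam_f * fn)

helpful = inote >= 0.4              # the 0.4 threshold from the episode
print(np.round(inote, 2), helpful)
```

The partisan split is absorbed by the factor term, so notes 2 and 3 end up with intercepts near zero despite each having enthusiastic supporters; only note 0, which both camps rate helpful, clears the 0.4 bar.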
2024 in numbers: 30B views · 95K notes shown · 2x growth vs 2023
Why this works

If people on opposite sides of an issue can agree a note is helpful, it's probably accurate, neutral, and well-written. This is anti-manipulation by design.

Design Principles

What Made Community Notes Succeed

  • Open to everyone: No professional fact-checkers gatekeeping; contributors are admitted by random selection, not credentials.
  • No exemptions: Elon gets notes. World leaders get notes. Advertisers get notes.
  • Zero clicks: Full note context shows inline, not behind a link. Users see the extra information immediately.
  • Adding context, not deciding truth: Notes are informational, not verdicts. Users make their own minds up.
  • Matching across posts: A note about an AI-generated image is automatically matched to every instance of that image.
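The episode says a note is matched to every instance of an image but doesn't describe the mechanism. A common technique for that kind of matching is perceptual hashing; this toy "average hash" (downscale, threshold at the mean, compare Hamming distance) is an illustrative assumption, not X's actual pipeline:

```python
# Hypothetical sketch: match a note across re-uploads of the same image
# using an average hash. Not X's actual matching implementation.

def average_hash(pixels, hash_size=8):
    """pixels: 2D list of grayscale values (0-255); returns a 64-bit int."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(hash_size):
        for c in range(hash_size):
            # average the block of pixels mapped to this 8x8 grid cell
            r0, r1 = r * h // hash_size, (r + 1) * h // hash_size
            c0, c1 = c * w // hash_size, (c + 1) * w // hash_size
            block = [pixels[i][j] for i in range(r0, r1) for j in range(c0, c1)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    bits = 0
    for v in cells:                 # one bit per cell: above or below the mean
        bits = (bits << 1) | (v > mean)
    return bits

def same_image(a, b, max_dist=5):
    """Re-encodes and slight edits shift only a few bits; attach the note
    when the two hashes are within a small Hamming distance."""
    return bin(average_hash(a) ^ average_hash(b)).count("1") <= max_dist

# A brightened re-upload still matches; an unrelated image doesn't.
img = [[(3 * i + 7 * j) % 200 for j in range(32)] for i in range(32)]
brighter = [[p + 20 for p in row] for row in img]
other = [[255 - p for p in row] for row in img]
print(same_image(img, brighter), same_image(img, other))  # True False
```

Uniform brightening doesn't change which cells sit above the mean, so the hash is unchanged, while an unrelated image flips most bits.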
The cat vs. dog example

Post said "A Palestinian boy shares his bread with a dog." The note: "That's a cat." Not important in isolation, but proof the system is run by real users, not editorial gatekeepers.

The notification feature

If you liked a post and a note appears on it later, you get pinged. That closes the gap between false claim and correction, something newspapers could never do.
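The feature above reduces to a simple fan-out. This is a minimal sketch with hypothetical names (function, fields, and message text are illustrative, not X's API):

```python
# Hypothetical sketch: when a note on a post reaches Helpful status,
# ping everyone who engaged with the post before the note appeared.

def notify_prior_engagers(post_id, note_text, likes_by_post, send):
    """Call once a note on `post_id` is rated Helpful.
    likes_by_post maps post id -> users who liked it;
    send(user, message) is the delivery layer."""
    for user in likes_by_post.get(post_id, ()):
        send(user, f"A Community Note was added to a post you liked: {note_text}")

# Usage with an in-memory stand-in for the delivery layer:
sent = []
likes = {"post-1": ["alice", "bob"]}
notify_prior_engagers("post-1", "That's a cat.", likes,
                      lambda user, msg: sent.append((user, msg)))
print(len(sent))  # 2
```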

Operating Model

Radical Simplicity at Scale

  • One Google Doc (4 years old, sometimes breaks Chrome). All coordination lives there
  • Daily team meetings. Talk about what's most important right now, not quarterly plans
  • No Jira, no Asana, no Monday.com. Lightweight. Things can disappear if they become irrelevant
  • One person per function at start: 1 PM, 1 frontend engineer, 1 backend engineer, 1 ML engineer, 1 designer, 1 researcher
  • Self-selection only: No one was assigned. Everyone applied to join and was interviewed
  • One decision-maker sponsor (Keith reports directly to Elon)
The lean operating insight

Keith: "I don't know if the project would be here if it wasn't for this structure. This thing that changed how the world understands truth wouldn't exist without this setup."
Contrarian

What Everyone Gets Wrong About Community Notes

  • Myth: You need professional fact-checkers. Instead → regular people with a voting track record do this better; they're closer to reality, and users self-select in.
  • Myth: Majority vote solves misinformation. Instead → majority vote just creates partisan wars; you need people from opposite sides to agree.
  • Myth: Heavy process means better safety. Instead → radical simplicity (one Google Doc!) actually scales; lean teams move faster and make better decisions.
  • Myth: You can't do this in a polarized world. Instead → polarization is exactly why this works; people who disagree on everything can agree when something is clearly false.