Building LeSearch: From Papers to Action

Every researcher knows the pain. You find a promising paper title. You download the PDF. You skim the abstract. You realize it's irrelevant. Repeat 50 times.
When you finally find the right paper, extracting the data into a usable format is a manual nightmare.
The Hypothesis
I believed that LLMs could fix this funnel. Not just by letting you "chat with a PDF"—that's table stakes now—but by understanding the relationships between papers. If Paper A cites Paper B, and Paper B refutes Paper A, an AI should surface that tension immediately.
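One way to picture this is a citation graph whose edges carry a stance. This is a minimal sketch, not LeSearch's actual data model; the `Stance` labels and paper IDs are assumptions for illustration. A paper is "in tension" when the literature both supports and refutes it.

```typescript
// Sketch: a citation graph with typed edges, so disagreements between
// papers can be queried directly instead of read one PDF at a time.
type Stance = "supports" | "refutes" | "extends";

interface CitationEdge {
  from: string; // citing paper ID (hypothetical)
  to: string;   // cited paper ID (hypothetical)
  stance: Stance;
}

// Return papers that are both supported and refuted by the literature.
function papersInTension(edges: CitationEdge[]): string[] {
  const stancesByTarget = new Map<string, Set<Stance>>();
  for (const e of edges) {
    const set = stancesByTarget.get(e.to) ?? new Set<Stance>();
    set.add(e.stance);
    stancesByTarget.set(e.to, set);
  }
  return [...stancesByTarget.entries()]
    .filter(([, stances]) => stances.has("supports") && stances.has("refutes"))
    .map(([paperId]) => paperId);
}
```

The hard part in practice is not the graph query but classifying each citation's stance reliably, which is where the LLM comes in.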
Building LeSearch AI
The Beta
I hacked together the MVP in a weekend using Next.js and Supabase. The goal was simple: upload a PDF, get a structured summary, and see a graph of related concepts.
I launched it to a small group of grad students and policy analysts.
What worked:
- The Knowledge Graph visualization was a hit. People loved seeing how concepts connected visually.
- Guided Briefs: Instead of a generic summary, the AI generated a specific "Policy Brief" or "Technical Memo" based on user intent.
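The "Guided Briefs" idea boils down to keying the generation prompt on user intent rather than using one generic summarization prompt. A rough sketch, assuming hypothetical intent names and template text:

```typescript
// Sketch: intent-keyed brief templates. The intents and wording here are
// illustrative, not the actual LeSearch prompts.
type Intent = "policy_brief" | "technical_memo";

const templates: Record<Intent, string> = {
  policy_brief:
    "Summarize {title} for a policy audience: implications and recommendations.",
  technical_memo:
    "Summarize {title} for engineers: methods, assumptions, limitations.",
};

// Fill the template for a given intent and paper title.
function buildPrompt(intent: Intent, title: string): string {
  return templates[intent].replace("{title}", title);
}
```

The point of the design is that intent selection happens before generation, so the model is steered toward a specific deliverable instead of averaging over all possible audiences.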
What failed:
- Hallucinations: Early RAG implementations were too confident. I had to implement strict citation grounding—if the AI couldn't highlight the sentence in the PDF, it wasn't allowed to say it.
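The grounding rule above can be sketched as a verbatim-quote check: a claim is only emitted if its supporting quote actually appears in the extracted PDF text. This is a simplified sketch under stated assumptions (real PDF extraction is messier, so the whitespace normalization here is the bare minimum, not the production check):

```typescript
// Sketch of strict citation grounding: reject any claim whose supporting
// quote cannot be found verbatim in the source document's text.
interface GroundedClaim {
  claim: string; // the statement the model wants to make
  quote: string; // the exact sentence it says supports the claim
}

// Collapse whitespace and case so line breaks in extracted PDF text
// don't cause false rejections.
function normalize(s: string): string {
  return s.replace(/\s+/g, " ").trim().toLowerCase();
}

function isGrounded(c: GroundedClaim, pdfText: string): boolean {
  return normalize(pdfText).includes(normalize(c.quote));
}
```

Anything that fails this check gets dropped (or flagged) before it reaches the user, which is what made the highlight-in-PDF guarantee enforceable.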
The Pivot
After 50+ beta users, I realized the real value wasn't in reading papers, but in synthesizing them. Users didn't want a faster reader; they wanted a research assistant that could say, "Here are the 3 papers that disagree with your hypothesis."
We are now rebuilding LeSearch with this "Synthesis Engine" at the core. It's leaner, faster, and focused entirely on moving from "Saved PDF" to "Actionable Insight."