Can one person really publish over 100 AI papers in a year?
Kevin Zhu, a recent Berkeley graduate who runs a mentoring company for high schoolers, claims he can.
He says 89 of those papers will be presented this week at NeurIPS, one of the world’s top AI conferences.
But many in the AI research world are raising eyebrows.
Zhu’s company, Algoverse, charges students thousands of dollars for a 12-week program that helps them produce papers for conferences.
He insists he “supervises” the work, reviewing methodology and drafts, while students and mentors handle the rest.
AI tools, he maintains, are used only occasionally, for editing. Critics, however, call the output “a disaster.”
AI Research Overload
Hany Farid, a Berkeley computer science professor, says, “I’m fairly convinced that the whole thing… is just vibe coding.”
He is referring to the practice of letting AI churn out software, or in this case research, without meaningful human contribution.
The problem isn’t just Zhu. AI conferences are swamped.

NeurIPS received more than 21,000 submissions this year, over double its 2020 total.
Reviewers complain about poor quality, AI-generated content, and rushed peer review.
Farid warns that students chasing publication counts risk flooding the field with low-quality work. Thoughtful research, meanwhile, struggles to get attention.
So, what does this mean for AI? As Farid puts it bluntly: “You can’t keep up, you can’t publish, you can’t do good work, you can’t be thoughtful.”
In a field exploding with hype, the signal is buried in the noise, leaving everyone asking: who is actually making real progress?


