- A psychology professor in Norway discovered that a paper he was invited to review cited his own work, except that the cited research did not exist: a clear instance of the "hallucinated citation" phenomenon produced by generative AI (a simple automated existence check, sketched after this list, can flag such references).
- This phenomenon is spreading across academia, from prestigious journals to policy reports, showing that generative AI is eroding the credibility of scientific publishing.
- The volume of submissions to journals has spiked since large language models became popular, driven by both legitimate productivity gains and organized fraud.
- “Paper mills” sell mass-produced research papers that reuse text and image templates, particularly in fields like cancer research, blockchain, and AI.
- AI generates not only text but also fake scientific images (histology slides, electrophoresis gels, even misleading biological illustrations) that still pass peer review.
- Some major AI conferences have seen submissions double in five years; over 50 papers containing fabricated citations have slipped through the review process.
- Approximately 50% of reviews at some conferences are written with AI assistance, and about 20% are generated entirely by AI.
- Preprint servers like arXiv, bioRxiv, and medRxiv are also seeing a wave of AI-generated papers, including cases where previously unpublished authors submit up to 50 papers a year.
- If "noise" comes to outweigh "signal," the scientific community faces an existential crisis in which real knowledge is drowned out.
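As context for the hallucinated-citation problem above: when references carry DOIs, their existence can be checked mechanically. Below is a minimal sketch, assuming Python with the `requests` library, that queries Crossref's public REST API (which returns HTTP 404 for unknown DOIs). The function name and sample DOIs are illustrative assumptions, not part of any journal's actual workflow.

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI.

    Crossref's REST API answers 200 with metadata for known DOIs
    and 404 for unknown ones.
    """
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        # Crossref asks polite clients to identify themselves.
        headers={"User-Agent": "citation-check/0.1 (mailto:editor@example.org)"},
        timeout=10,
    )
    return resp.status_code == 200

# Hypothetical reference list extracted from a manuscript under review.
references = ["10.1000/example.2020.123", "10.1000/fabricated.2024.999"]
for doi in references:
    if not doi_exists(doi):
        print(f"Flag for manual check, no Crossref record: {doi}")
```

A failed lookup is only a signal, since many legitimate works lack DOIs or are indexed elsewhere; and a hallucinated citation that borrows a real DOI for a nonexistent title would require matching the returned metadata against the reference text.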
Conclusion: Journal and conference submissions have surged since LLMs became popular, driven by both legitimate efficiency gains and organized fraud, and AI-written reviews are already widespread. Science faces a risk of long-term "cognitive pollution": AI writes the papers, AI reviews them, and AI then learns from the very data trash it created.
