By Wonsuh Song
Recently, Science magazine published a striking article in its Science Insider (Scientific Community) section. Titled “Scientific fraud has become an ‘industry’,” it introduced a disturbing study, “The entities enabling scientific fraud at scale are large, resilient, and growing rapidly,” by Reese A. K. Richardson and his team at Northwestern University. Published in PNAS (Proceedings of the National Academy of Sciences), the research uncovered the deeply rooted and expanding architecture of fraud in academic publishing.
PNAS is one of the world’s most prestigious multidisciplinary journals, published by the U.S. National Academy of Sciences since 1914. It is known for rapid and rigorous peer review, and for disseminating impactful research across biology, physical sciences, and social sciences.
This study goes beyond exposing simple “paper mills.” It reveals international networks involving brokers, hijacked journals, fake conferences, and even internal editors at major publishing houses. At PLOS ONE, for example, a small group of editors handling only 1.3% of total submissions was responsible for over 30% of the journal’s retracted papers—often assigning manuscripts to each other in a closed editorial loop.
The team traced signs of mass-produced fake research using duplicate images and suspicious writing patterns. ARDA, a broker based in Chennai, India, was identified as a key player in this ecosystem, guaranteeing publication in over 70 journals—many already delisted from Scopus. Their portfolio shifted over time, evading indexing systems.
Notably, this research predates the widespread adoption of generative AI tools like ChatGPT, Perplexity, and Gemini. Today, such tools are being used not only to write but even review papers, making it harder than ever to distinguish genuine scholarship from fraudulent work.
I currently teach at a university in Japan and have served as an editor of an international academic journal. As a researcher, I’ve faced countless delays, rejections, and overwhelming revision requests. As an editor, I struggled to recruit reviewers—often emailing ten researchers to get just one response. Many agreed but never submitted their reviews, and follow-up emails often went unanswered.
The most concerning issue is that all of this labor is unpaid. Reviewers and editors receive no compensation. This lack of incentives undermines accountability and slows the entire system. Top-tier researchers often decline requests, considering peer review a waste of time. The structure is already overstretched, and now faces new threats from AI-driven fraud.
Why does this system persist? Because academic institutions still rely heavily on simplistic metrics—publication counts, impact factors—to assess a researcher’s worth. In an era where such numbers no longer reflect true scholarly depth, we continue to worship them. Meanwhile, real science is distorted, trust erodes, and ethics crumble.
Richardson’s team doesn’t just criticize—they offer evidence. Analyzing hundreds of thousands of metadata records, they document a structural phenomenon they term “collaborative defection,” in which insiders exploit the system for mutual benefit.
The current peer review model is underfunded, fragile, and reactive. With fraudulent science rising faster than integrity measures can keep up, and with AI blurring the lines further, we must act.
So I ask: will we continue measuring research by quantity? Or will we evolve to value integrity, education, social impact, and ethical contribution?
It is time to shift from counting papers to assessing real science.
Wonsuh Song (Ph.D.)
Lecturer, Shumei University / NKNGO Forum Representative