Academic fraud is a growing problem in science. For the sake of the integrity of our academic institutions, and of science itself, something must be done to discourage these shady practices.

The term “fraud” in an academic context carries different connotations than the everyday use of the word, which implies using deception for illegal financial gain. In scientific fraud, the perpetrator gains academic acclaim through deceit, dishonesty and false representation.

But there’s also a growing trend of bibliometric manipulation. This includes practices such as excessive self-citation, citation cartels and coercive citation. These practices are problematic because citation is the currency by which academic journal articles, and the authors who write them, demonstrate their standing. The more other researchers cite your article, the more influential it is deemed to be. When the number of times an author or a journal is referenced is artificially inflated, that can skew which science is perceived to be “important” in the field.

Other forms of author and journal misconduct are also troubling the sector. In fake peer review, authors suggest the names of peers to review their papers but supply fake contact details. If journal editors do not check those details carefully, this essentially allows the submitting authors to write their own reviews. The practice of “gift authorship” sees academics add friends or colleagues as authors on their papers, even though they have contributed nothing to the work, allowing those colleagues to artificially inflate their publication numbers and citation counts.

There are even entirely ghost-written papers whose named “authors” had little or nothing to do with the work at all.

In 2023, the number of papers that made it through “peer review” and were published, only to be retracted later because they were found to be fraudulent, topped 10,000 for the first time. And the papers actually caught may represent just the tip of the iceberg. Some authors have suggested that as many as one in seven scientific papers could be fake, although estimates vary widely.

To some extent, academia has brought these problems on itself through its increased reliance on metrics to judge the performance of an academic, a journal or an institution. H-indices, which measure both how many papers an academic has published and how often those papers have been cited (for example, I have an H-index of 28, meaning I have published 28 papers that have each been cited at least 28 times), are used as proxies for influence, as are even cruder metrics such as raw publication or citation counts.
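To make that definition concrete, here is a minimal sketch of the H-index calculation in Python; the function name and the citation counts are invented for illustration, not taken from any real bibliometric tool.

def h_index(citations):
    # Sort per-paper citation counts from highest to lowest.
    counts = sorted(citations, reverse=True)
    h = 0
    # The H-index is the largest h such that at least h papers
    # have each been cited at least h times.
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers with these citation counts give an H-index of 3,
# because three of them have been cited at least three times each.
print(h_index([10, 4, 3, 1, 0]))  # prints 3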

Hiring and promotion committees often use these benchmarks as shorthand for academic quality, meaning that an academic’s job prospects and career progression can depend very strongly on these numbers.

For journals, a similar metric used to compare quality between publications is the impact factor, which measures the average number of citations received in a given year by the articles a journal published over the previous two years. Not only does a high impact factor bring prestige to a journal, but it also attracts better-quality submissions, forming a positive feedback loop.
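For readers unfamiliar with the measure, here is a rough sketch of the arithmetic, assuming the standard two-year window; the figures are invented for illustration.

def impact_factor(citations_this_year, articles_prev_two_years):
    # Citations received this year by articles published in the
    # previous two years, divided by the number of those articles.
    return citations_this_year / articles_prev_two_years

# Example: 500 citations in 2024 to the 200 articles a journal
# published in 2022 and 2023 give an impact factor of 2.5.
print(impact_factor(500, 200))  # prints 2.5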

The problem with these metrics-cum-performance indicators is that they are gameable by the unscrupulous and the desperate. This is a classic example of Goodhart’s law which states: “When a measure becomes a target, it ceases to be a good measure.”

These metrics provide perverse incentives for academics to publish as much as possible as quickly as possible, with as many self-citations as they can get away with — sacrificing quality and rigor to the gods of quantity and speed.

Adding in a bunch of references to your own papers (whether relevant or not) and getting a cartel of cooperating colleagues to do the same in their papers is one way of inflating these statistics. It might seem relatively harmless, but stuffing the references section with irrelevant papers makes the paper more difficult to navigate, ultimately degrading the quality of the science presented.

For a recent paper I submitted, one of the referees tasked with checking the paper over before its acceptance for publication requested that I cite a whole bunch of completely irrelevant papers. As a senior academic, I felt confident enough to complain to the journal about this referee, but more junior colleagues, for whom that publication might mean the difference between getting the next job and not, might not have felt comfortable complaining. If that journal has integrity, that referee should be struck from its reviewer list, but some journals have fewer scruples than others.

Recent years have seen a move away from the traditional model of academic publishing, in which journals make their money by charging end-users to access their articles, and towards an “open-access” model. On the face of it, open access democratizes research by allowing the public, who are often (if indirectly) the funders of the research through government grants, to access it for free. This is why research funders often provide universities with money for “article processing charges” (typically thousands of dollars per paper), which the universities then pay to journals to make the published articles freely available.

But this move towards open access has provided another perverse incentive. The decline of the audience-funded model has meant that the quality of articles is no longer a critical issue for some cynical journals. Even if no-one ever reads them, the money is in the bank and the citation metrics are automatically harvested. The incentive for unscrupulous journals and academics is to publish as many papers as possible, as quickly as possible. Inevitably, the quality and the reputation of science suffer as a result.

Tackling scientific fraud

So what can be done to counter the pervasive and growing threat of scientific fraud? A two-part report commissioned by the International Mathematical Union (IMU) and the International Council for Industrial and Applied Mathematics (ICIAM) offers some suggestions for how we might fight back.

Starting at the top, policy makers, ranging from politicians to funding bodies, should encourage the move away from gameable metrics, including university rankings, journal rankings, impact factors and H-indices. Funding decisions in particular should be decoupled from these numbers.

At an institutional level, research organizations need to discourage the use of bibliometrics in promotion and hiring, or risk letting low-quality scientists who game the system rise above their more diligent colleagues. Institutions can also vote with their feet by deciding which article processing charges to pay, denying predatory journals their main source of income.

A big part of the problem is a simple lack of awareness amongst scientists and those who work alongside them. Institutions should be doing more to educate their researchers and research administrators about fraudulent academic practices.

Of course, much of the responsibility to reduce academic fraud has to lie with the researchers themselves. This means choosing carefully which editorial board to join, which journals to submit work to and which to undertake peer review for. It also means speaking out when encountering predatory practices, which is easier said than done. Many of those who speak out against predatory practices choose to do so anonymously for fear of reprisals from publishers or even their peers. Consequently, we must also foster a culture in which whistleblowers are protected and supported by their institutions.

Ultimately, whether good science is swamped by an ever-rising tide of poor-quality studies, or whether we are able to turn that tide back, depends on the integrity of researchers and the awareness of the organizations that facilitate and fund their work.


