“Research misconduct accounts for a small percentage of total funding”: Study

How much money does scientific fraud waste? That’s an important question, with an answer that may help determine how much attention some people pay to research misconduct. But it’s one that hasn’t been rigorously addressed. Seeking some clarity, Andrew Stern, Arturo Casadevall, Grant Steen, and Ferric Fang looked at cases in which the Office of […]

“Barriers to retraction may impede correction of the literature:” New study

One of the complaints we often hear about the self-correcting nature of science is that authors and editors seem very reluctant to retract papers with obvious fatal flaws. Indeed, it seems fairly clear that the number of papers retracted is smaller than the number that should be. To try to get a sense […]

“Why Has the Number of Scientific Retractions Increased?” New study tries to answer

The title of this post is the title of a new study in PLOS ONE by three researchers whose names Retraction Watch readers may find familiar: Grant Steen, Arturo Casadevall, and Ferric Fang. Together and separately, they’ve examined retraction trends in a number of papers we’ve covered. Their new paper tries to answer a question […]

Have you been involved in scientific fraud? Grant Steen wants to hear from you

Regular Retraction Watch readers may find the name Grant Steen familiar. Steen has published a number of important papers on retractions, most recently in PNAS. Recently, he approached us for help with what sounds like another project that is likely to increase our understanding of misconduct in science: Steen wants to gather the stories of those involved in fraud. We’re happy to present his explanation of the project, and his requests:


Grant Steen

Why is there fraud in science?

Scientists believe—or at least profess to believe—that science is a process of iteratively approaching Truth.  Failed experiments are supposed to serve as fodder for successful experiments, so that clouded thinking can be clarified.  Observations that are fundamentally true are thought to find support, while observations that are flawed in some way are supplanted by better observations.

Why then would anyone think that scientific fraud can succeed?  Fraud would seem to be intellectual pyrotechnics; a dazzling light that leaves us in darkness.  If science truly is self-correcting, then why would people risk perpetrating fraud?  The notion of self-correction suggests that fraud is certain to be found out. Why risk it? Or are most scientists wrong?  Does science often fail to self-correct?  Is the literature full of misinformation, left behind like landmines in an abandoned battlefield?

What is the rationale for data fabrication and data falsification?  We invite anyone who has been involved in a scientific retraction due to fraud, or otherwise implicated in scientific misconduct, to write an essay for inclusion in a projected book about scientific fraud.  Essays are solicited from people who were involved as either a perpetrator or a co-author.  It is vital that this account be written from a personal perspective.  Please limit speculation and stick to verifiable facts insofar as possible, so that future historians can learn what actually happened.  Please do not discuss retractions that resulted from an honest scientific mistake, and do not dwell on transgressions such as plagiarism, duplicate publication, or co-author squabbles.  Discussion should focus primarily on data fabrication and data falsification.  We are especially interested in first-person accounts that relate to any (or all) of the following questions:

  • What actually happened?
  • What is the scientific story behind the transgression?
  • How did you (or a colleague) fabricate or falsify data?
  • What was the short- or long-term goal of the deception?
  • Did you perceive any significant obstacles to fabrication or falsification?
  • Did the research infrastructure fail in any way?
  • How was the fraud discovered?
  • Do you believe that the scientific enterprise was damaged?
  • What was the aftermath for you and for your collaborators?
  • What are your thoughts and perceptions now?

Please limit your essays to 3,000 words and send them to G_Steen_MediCC@yahoo.com. Be prepared to prove that you are who you claim to be; we will try hard not to be taken in by a scam.  However, it may be possible to publish the piece anonymously, though this would greatly lessen the impact.  If accepted for publication, your work will be edited for clarity only; there will be no censorship, no editorial intrusion, and no correction of what are claimed as facts.  However, these essays will become part of a multi-author dialogue about scientific fraud.  If a book contract can be secured, each essay will form a chapter in the book.  No profits are anticipated, so no financial gain can accrue from the project.  However, this is a chance to tell your story on a national stage.


Majority of retractions are due to misconduct: Study confirms opaque notices distort the scientific record

A new study out in the Proceedings of the National Academy of Sciences (PNAS) today finds that two-thirds of retractions are because of some form of misconduct — a figure that’s higher than previously thought, thanks to unhelpful retraction notices that cause us to beat our heads against the wall here at Retraction Watch.

The study of 2,047 retractions in biomedical and life-science research articles in PubMed from 1973 until May 3, 2012 brings together three retraction researchers whose names may be familiar to Retraction Watch readers: Ferric Fang, Grant Steen, and Arturo Casadevall. Fang and Casadevall have published together, including on their Retraction Index, but this is the first paper by the trio.

The paper is — as we’ve come to expect from these three — an extremely careful analysis, the most comprehensive we’ve seen to date. Other studies have offered clues to these trends, but by looking at so many years of data, and by including secondary sources on the reasons for retraction, the authors have made a very important contribution to our understanding of what drives retraction.

The study is convincing evidence that we’re onto something when we say that unhelpful retraction notices distort the scientific record. We’re thrilled that the authors’ analysis of opaque retraction notices relies heavily on Retraction Watch posts, as indicated in Table S1, “Articles in which Cause of Retraction was Ascertained from Secondary Sources.” This is exactly what we’ve been hoping scholars would start doing with our individual posts — and we welcome more of these kinds of analyses.

When the authors reviewed the secondary sources available to them — news stories and Office of Research Integrity reports, in addition to Retraction Watch and others — they ended up reclassifying the cause of retraction in 158 cases. That led them to conclude that

…only 21.3% of retractions were attributable to error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%).

Compare that with Grant Steen’s findings of ten years’ worth of retractions (about a third as many as in the current paper), published early last year:

Error is more common than fraud; 73.5% of papers were retracted for error (or an undisclosed reason) whereas 26.6% of papers were retracted for fraud (table 1). The single most common reason for retraction was a scientific mistake, identified in 234 papers (31.5%). Fabrication, which includes data plagiarism, was more common than text plagiarism. Multiple reasons for retraction were cited for 67 papers (9.0%), but 134 papers (18.1%) were retracted for ambiguous reasons.

It’s now clear that the reason misconduct seemed to play a smaller role in retractions, according to previous studies, is that so many notices said nothing about why a paper was retracted. If scientific journals are as interested in correcting the literature as they’d like us to think they are, and want us to believe they’re transparent, the ones that fail to include that information need to take a lesson from those that do.

Yes, we’re looking at you, Journal of Biological Chemistry, as are the authors:

Policies regarding retraction announcements vary widely among journals, and some, such as the Journal of Biological Chemistry, routinely decline to provide any explanation for retraction. These factors have contributed to the systematic underestimation of the role of misconduct and the overestimation of the role of error in retractions (3, 4), and speak to the need for uniform standards regarding retraction notices (5).

Those standards exist, of course — here are COPE’s — but some journals don’t seem to think they’re worth following.

The fact that just one in five retractions is due to honest error suggests that researchers who say retractions should be reserved for fraud are simply reflecting common practice. There’s been an interesting debate recently about when a retraction is appropriate, and the findings may inform that, too.

The question, of course, is, how common is scientific misconduct? The simple but unsatisfying answer is that we don’t know, certainly not based on this study, because it looks only at retractions. Some of the best data we have come from a 2009 paper in PLoS ONE by Daniele Fanelli. In it, Fanelli does his own survey, and combines findings from other surveys. He concludes:

A pooled weighted average of 1.97% (N = 7, 95%CI: 0.86–4.45) of scientists admitted to have fabricated, falsified or modified data or results at least once –a serious form of misconduct by any standard– and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% (N = 12, 95% CI: 9.91–19.72) for falsification, and up to 72% for other questionable research practices. Meta-regression showed that self reports surveys, surveys using the words “falsification” or “fabrication”, and mailed surveys yielded lower percentages of misconduct. When these factors were controlled for, misconduct was reported more frequently by medical/pharmacological researchers than others.

Considering that these surveys ask sensitive questions and have other limitations, it appears likely that this is a conservative estimate of the true prevalence of scientific misconduct.

In other words, 2% of scientists admit to having committed misconduct, but almost three-quarters say their colleagues have been involved in “questionable research practices.” But those may be low figures.

As the authors of the new PNAS study point out, all we can say for sure, based on their findings, is that misconduct plays more of a role in retractions than we thought it did. But we think they make a good argument for why retractions may be the canary in a coal mine when it comes to fraud, when they write that:

…only a fraction of fraudulent articles are retracted; (ii) there are other more common sources of unreliability in the literature (41–44); (iii) misconduct risks damaging the credibility of science; and (iv) fraud may be a sign of underlying counter-productive incentives that influence scientists (45, 46). A better understanding of retracted publications can inform efforts to reduce misconduct and error in science.

The paper is part of a growing oeuvre on retractions by the authors, two of whom have testified at the National Academy of Sciences:

We have previously argued that increased retractions and ethical breaches may result, at least in part, from the incentive system of science, which is based on a winner-takes-all economics that confers disproportionate rewards to winners in the form of grants, jobs, and prizes at a time of research funding scarcity (32, 46, 47).

The authors also found that the reasons for retraction seemed to vary by geography:

Most articles retracted for fraud have originated in countries with longstanding research traditions (e.g., United States, Germany, Japan) and are particularly problematic for high-impact journals. In contrast, plagiarism and duplicate publication often arise from countries that lack a longstanding research tradition, and such infractions often are associated with lower-impact journals (Fig. 3 and Table 1).

Those findings, as the authors make clear, are based on raw data, not a statistical analysis. That’s because to do the latter, and prove that a given reason for retraction was actually more common in a given country or region, you’d need the total number of papers published in that country or region, and that would go beyond what’s available in PubMed. Fang tells Retraction Watch:

Our analysis of geographical data was performed with a simple purpose in mind.  We were interested to see whether the geographical distribution of retractions differs depending on the cause (since the raw data showing countries of origin for papers retracted for fraud, plagiarism or duplicate publication have the same denominators, the three categories can be compared with each other).  This leads us to suggest that the dynamic of retractions for each of these causes is different in space (as well as in time), and should therefore be considered as separate events that are likely to have different underlying causes.  However it would not be appropriate to compare individual countries with each other, e.g. to say that plagiarism is more common in country X than in country Y, because that would require correction for the number of publications from each country.

The data do agree in general terms with those in another recent paper by medical writers in Australia. That paper, by Serina Stretton, Karen Woolley, and colleagues, could reliably conclude that first authors from lower-income countries account for more of the retractions for plagiarism among misconduct-related retractions, but for similar reasons could not determine whether such authors have more retractions for plagiarism as a proportion of papers overall.

What will be interesting to watch is what happens if the authors, or anyone else, repeats this kind of analysis in a year, or five years. Will journals pay attention, and write more informative notices? If so, will we see misconduct continue to grow as a share of retractions? Retractions are increasing so quickly that those in a given year may represent as much as a quarter of all papers ever withdrawn. That means the trends the authors identify could become even stronger.

Some of this may echo interviews that Ivan did about the study over the past week. We’ll update this post with links to those stories as they appear.