Could ‘write once/read many’ discourage cheating?

TJ O’Neil

In a recent Science editorial, Barbara Redman and our Ivan Oransky called for a boost to the budget and authority of the U.S. Office of Research Integrity (ORI). In this letter, a nephrologist and researcher suggests one potential way to fight fraud.

Bravo on your editorial, which pointed out the pathetic funding level for an agency that is supposed to put a check on self-interested fabrication and distortion in scientific research.  Perhaps universities and influential individuals who feel the threat of censure have collaborated to minimize that risk by throttling the Office of Research Integrity (ORI).  Regardless, billions of dollars each year are probably lost in misdirected efforts based on false information. That is a national tragedy.

During my time as an undergraduate at Caltech, we had an honor code that was very clear: You cheat, lie, or fabricate, and you are at best heavily censured, and likely out.  We learned that our research notes were our reputation, and that our supervising senior researchers would often and unpredictably ask to review them.  It was daunting and occasionally very stressful, but it led to a lifelong ethic that stood me in good stead when I went into medicine, where people’s lives were at stake based on what we wrote and did.  

One tool I used while working for the Veterans Administration (VA) could be useful in the wider context of research. The VA has an electronic health record (EHR) called “CPRS” that is based on “write once/read many.”  Every note and every prescription is encrypted and stored in a way that cannot be erased.  You can enter a note amending an erroneous entry, but the original entry can never be erased.  It uses a lot of storage, but it assures that if an audit is needed for any reason, the original entry will be available for review.  Perhaps having lab notes based on such a digital system would raise the bar to post-hoc alteration high enough to discourage at least that form of cheating.  
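The “write once/read many” idea can be sketched in a few lines of code. This is purely illustrative and is not how CPRS works internally; the class name, fields, and the hash chain are assumptions added for the sketch. The key properties are the ones the letter describes: entries are only ever appended, corrections reference (rather than replace) the original entry, and each entry commits to the previous one via a hash, so any silent post-hoc alteration is detectable on audit.

```python
import hashlib
import json
import time

class AppendOnlyNotebook:
    """Minimal sketch of a 'write once/read many' lab notebook.

    Entries are never modified in place; corrections are appended as
    new entries that reference the original. Each entry carries a hash
    chained to the previous entry, so altering a stored record breaks
    the chain and shows up on audit.
    """

    def __init__(self):
        self._entries = []  # in-memory stand-in for WORM storage

    def append(self, text, amends=None):
        """Append an entry; `amends` is the index of an entry being corrected."""
        prev_hash = self._entries[-1]["hash"] if self._entries else "0" * 64
        entry = {
            "index": len(self._entries),
            "timestamp": time.time(),
            "text": text,
            "amends": amends,
            "prev_hash": prev_hash,
        }
        # Hash a canonical serialization of the entry (sorted keys),
        # computed before the hash field itself is added.
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._entries.append(entry)
        return entry["index"]

    def verify(self):
        """Recompute the hash chain; returns True iff no entry was altered."""
        prev_hash = "0" * 64
        for entry in self._entries:
            if entry["prev_hash"] != prev_hash:
                return False
            payload = {k: v for k, v in entry.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if digest != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```

In use, a correction leaves the erroneous original in place, and tampering with any stored entry makes `verify()` fail:

```python
nb = AppendOnlyNotebook()
i = nb.append("Gel run 1: band observed at 42 kDa")
nb.append("Correction: band was at 40 kDa", amends=i)
nb.verify()  # True: chain intact, original still readable
```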

Of course, where prestige, money and fame are concerned there is no way to totally stop cheating. But an un-erasable digital form of lab notebook might help to at least reduce fabrication and manipulation in a world where lying seems to have become an accepted form of behavior for altogether too many in science.

Following graduation from Caltech, TJ O’Neil spent 45 years in the Air Force and VA as a clinical nephrologist after completing a fellowship in the lab of Jay Stein at the University of Texas, San Antonio.  He is currently developing a patient kidney disease education program and a dialysis safety device.

Like Retraction Watch? You can make a tax-deductible contribution to support our work, subscribe to our free daily digest or paid weekly update, follow us on Twitter, like us on Facebook, or add us to your RSS reader. If you find a retraction that’s not in The Retraction Watch Database, you can let us know here. For comments or feedback, email us at team@retractionwatch.com.

What analyzing 30 years of US federal research misconduct sanctions revealed

A U.S. federal agency that oversees research misconduct investigations and issues sanctions appears to be doling out punishments fairly, according to researchers who analyzed summaries of the agency’s cases from the last three decades. 

But the authors of the study also found more than 30 papers the ORI said should be retracted have yet to be retracted.

The researchers looked for associations between the severity of penalties the Office of Research Integrity (ORI) imposed on scientists it found responsible for research misconduct and their race and ethnicity, gender, academic rank, and other qualities. The researchers published their findings in late November in Accountability in Research, as the agency is in the process of revising its key regulations.

According to the new analysis, ORI’s sanctions correlated with factors indicating the seriousness of the misconduct, such as being required to retract or correct publications, but not with demographics. 

“We did not find evidence of bias,” Ferric Fang, a professor at the University of Washington School of Medicine and one of the study’s authors, said. 

Fang, also a member of the board of directors of The Center For Scientific Integrity, Retraction Watch’s parent nonprofit organization, told us: 

The ORI states that the severity of administrative actions should be based on the circumstances and severity of misconduct as well as the presence of aggravating or mitigating factors, such as repeated behavior, the impact of the misconduct on the research record and public health, retaliatory behavior, and whether the individual committing misconduct was directly responsible and accepted responsibility for their actions. Our findings suggest that the ORI has been consistent in applying these criteria.  

“In light of ongoing concerns about disparities in research funding and the STEM workforce, our findings should provide some reassurance that the ORI is applying its administrative actions in an even-handed manner,” Fang said. 

Specifically, the researchers found “factors related to the severity of the misconduct or aggravating factors, such as whether the person interfered with the investigation, had violated a prior agreement with ORI, or was required to retract papers, were positively associated with the severity of the administration action,” said David Resnik, the study’s corresponding author and a bioethicist with the National Institute of Environmental Health Sciences. He further summarized the group’s findings: 

Whether the person had committed plagiarism only (viewed by many as less serious than data fabrication/falsification) and whether the person admitted wrongdoing (a mitigating factor under most systems of punishment) were negatively associated with the severity of the administrative action.  Factors unrelated to the severity of the misconduct or aggravating or mitigating factors, such as the person’s race/ethnicity, gender, academic position, education, or institutional affiliation were not associated with the severity of administrative actions.   

The researchers found that a three-year period of research supervision or a funding ban was the most common sanction ORI levied, accounting for 65% of cases. ORI has occasionally banned researchers from federal funding for life, and more recently issued a 10-year funding ban and a pair of seven-year penalties.

“I suppose that this is the default length of time,” Fang said of the three-year sanction, “and may represent a penalty that imposes a true hardship on a researcher without necessarily being career-ending.” 

We asked how the findings might be relevant to ORI’s current proposals for revising its regulations. Resnik, a Federal employee, declined to speculate. Fang said: 

I am aware that some of the recently proposed updates to ORI policies have been controversial, but I don’t believe that the administrative actions examined in our study will be substantially changed.

The researchers also found that 32 papers ORI said should be retracted as part of its sanctions have not been pulled from the literature. 

“We should be concerned” about this finding, Resnik said. “It means that the literature has not been properly corrected and that scientists may be unwittingly relying on fraudulent research.  Moreover, these are people who were caught committing misconduct, were sanctioned by their institutions and ORI, and were required to correct or retract papers.” 

Fang said: 

The fact that some papers that the ORI asks to be retracted are not necessarily retracted reflects the fact that only journals can retract publications, and unfortunately, not all appear to take this responsibility seriously. 


Guest post: Why I commented on the proposed changes to U.S. federal research-misconduct policies – and why you should, too

Retraction Watch readers may know that the U.S. Office of Research Integrity, which has oversight of misconduct investigations of work funded by the National Institutes of Health, has proposed changes to its regulations. It’s the first such proposal since 2005, and has generated discussion in various quarters. We’re pleased to present this guest post by James Kennedy, a longtime observer of these issues.

One of the most controversial points about the federal policies for research misconduct is the extent to which a laboratory director, principal investigator, or lead author is held responsible for misconduct by others on their research team. 

It is surprisingly common in cases of extensive research fraud that the person who committed the offense cannot be identified. Data management in such cases is usually uncontrolled, with no tracking of changes to the data or preservation of the original data. The principal investigator is often at the center of the pattern of misconduct, but should they also be held accountable for it when the only provable fact is that they allowed a work environment that was vulnerable to bad behavior?

The regulations for handling misconduct in research funded by the U.S. Public Health Service are currently being modified, and the Office of Research Integrity (ORI), which implements the rules, is asking for public comment. This opportunity to influence the handling of misconduct is all the more important given that the regulations are often used as a model for misconduct policies at universities and research institutions. The comment period was recently extended until Jan. 4, 2024.  

Are you responsible for misconduct by others?

As a scientist who once helped expose fraud and has experience developing regulations, I have already submitted detailed comments on the proposed regulatory changes. My perspective is that the current legal precedent regarding the responsibility of PIs in misconduct cases is problematic. This precedent, which was established in 2018 in a case involving the ORI, holds that principal investigators and lead authors on papers will be held accountable for any fraudulent results they report. Their accountability extends to data provided by collaborators that were not under the management of the principal investigator or lead author. 

The judge apparently expects researchers to delve deeply into the possibility of fraud at all collaborating laboratories, without realizing that such an effort can take months and would make scientific collaboration impractical.

This assignment of responsibility for principal investigators and lead authors is established precedent for ORI but is not addressed in the proposed regulations. I believe it should be, and the new rules should be more realistic than the current precedent. In my view, laboratory directors and principal investigators are responsible for ensuring persons they directly manage follow good research practices, but not for collaborators who are not under their direct management. My comments to the regulator include a more complete description of my proposal (see point 2).

Another controversial topic is whether the final report of a university investigation of research misconduct should be made public for open discussion and evaluation like any other evidence and claims in science. The fact that allegations of misconduct have been made does not turn off the principles of open, transparent science. The justification and rationale for this point are described in point 4 of my comments.

Those who support, oppose, or would qualify the ideas discussed here should also consider submitting comments to ORI. 

Making regulatory comments effective

Submitting comments about proposed regulations will be more effective if the following practices are followed:

  • Multiple comments making the same point bring more attention to the point. The number of positive and negative comments will be considered. It is useful to comment even if a similar point has already been made by others. 
  • Provide information about your credentials and experience. It is fine for a person with no direct experience in research to express an opinion. That lets an agency such as ORI know the public perception. Opinions by those who have direct experience in research are also needed. The agency needs to be able to distinguish between these two cases.
  • Providing evidence to support your view is valuable. A personal experience that pertains to the need for or effect of a regulation can significantly influence a regulatory decision. References to relevant research findings are also useful. 
  • The comments can be short and focused, or more extensive. My comments are more extensive because I have experience developing regulations and want to address the full thought process of the agency. This includes describing a problem, proposing a solution, and providing a legal rationale. Most comments focus on describing a problem, and may or may not suggest a possible solution. Presenting a legal rationale is not needed for most comments.

When regulations are proposed or adopted, they are posted in the Federal Register. In general, the preamble to the regulations in the Federal Register must be read to obtain a good understanding of the intended meaning and rationale for the regulations.

Read the proposed rules for misconduct here, see instructions for submitting comments here, and read comments posted so far here.

James E. Kennedy is an honorary fellow in the psychology department of the University of Edinburgh, Scotland. He previously worked in the pharmaceutical industry and as an environmental scientist.


Purdue agrees to pay feds back $737,000 for grant submissions with fake data

Purdue University has reached a settlement with the federal government to pay back grant money the institution received through applications submitted with falsified data, according to the U.S. Attorney’s Office for the Northern District of Indiana. 

The settlement resolves allegations under the False Claims Act related to the case of Alice C. Chang (who also uses the name Chun-Ju Chang), a former associate professor of basic medical sciences at Purdue’s College of Veterinary Medicine in West Lafayette, Indiana. Inside Higher Ed reported first on the settlement.

Last December, the U.S. Office of Research Integrity found Chang had faked data in two published papers and nearly 400 images across 16 grant applications. As we reported then:

Two of the grant applications were funded. Chang received $688,196 from the National Cancer Institute, a division of the National Institutes of Health (NIH), from 2018-2019 for “Targeting metformin-directed stem cell fate in triple negative breast cancer.” The other grant, which ORI says was submitted in 2014 and funded, “Targeting cell polarity machinery to exhaust breast cancer stem cell pool,” does not show up in NIH RePORTER. The rest of the grants were not approved. 

Purdue agreed to pay the federal government $737,391, which the USAO release said “includes restitution and punitive damages.” 

However, Tim Doty, a Purdue spokesperson, told us the university “did not agree to any punitive damages.” He said: 

When in mid-2018 the university received notice from the U.S. Department of Health and Human Services calling into question the authenticity of some results that Dr. Alice Chang had included in proposal submissions to federal funding agencies since 2014, Purdue University cooperated and thoroughly investigated the alleged misconduct. When Purdue’s investigation was nearing conclusion in mid-2019, Dr. Chang left the university.  

Based on its investigation, Purdue agreed that the funding was not deserved and should be returned. 

Chang has been banned from all federal contracting, including grant funding, for 10 years. 

The False Claims Act allows the government to recover up to three times the amount it was defrauded, meaning that Purdue could have had to pay much more, said Eugenie Reich, a whistleblower lawyer and former investigative science journalist. 

“I think the university has got off relatively lightly in comparison to what it could have been, in recognition of its self-investigating,” Reich told Retraction Watch. “This kind of settlement should incentivize other universities to self-investigate.”

Indeed, the amounts of settlements can vary a lot. Massachusetts General Hospital got off with repaying the government less than $900,000 in grant funds in 2021, compared with nearly $10 million in an earlier case that the Brigham and Women’s Hospital, another Harvard facility, paid to settle claims based on allegations it said it brought to the government’s attention. Columbia University had to pay $9.5 million in 2016, and Duke University settled a case for $112.5 million in 2019.


Auburn PhD student faked data in grant application and published paper, feds say

A former PhD student at Auburn University in Alabama relabeled and reused images inappropriately in a grant application, published paper, and several presentations, a U.S. government watchdog has found. 

The Office of Research Integrity says Sarah Elizabeth Martin “engaged in research misconduct by intentionally or knowingly falsifying and/or fabricating experimental data and results obtained under different experimental conditions,” according to a case summary posted online. 

The published paper, “The m6A landscape of polyadenylated nuclear (PAN) RNA and its related methylome in the context of KSHV replication,” appeared online in advance of publication in RNA in June 2021. The journal retracted the article last year, with the following notice:

RNA is retracting the above-mentioned article because the authors have lost confidence in the validity of some of the data and conclusions drawn from them. This action has been agreed to by all of the authors. The authors regret any inconvenience that this has caused to the scientific community.

The paper has been cited a total of nine times, according to Clarivate’s Web of Science, with seven of those citations coming after the retraction.

ORI’s findings include an extensive and detailed list of images Martin reused and relabeled to represent different experiments, including original gel images in PowerPoint presentations she provided to RNA to support her data. 

We sent an email requesting comment to Martin’s Auburn email address and did not immediately receive a response or a bounce back. Martin’s former advisor, Joanna Sztuba-Solińska, was the principal investigator on the grant application ORI said contained fake data. That grant garnered $188,451 in funding from the National Institute of Allergy and Infectious Disease. Sztuba-Solińska now works in vaccine development at Pfizer. 

Martin agreed to a three-year ban from all federal contracting, including grant funding, as well as a two-year supervision period for any federally funded research, to begin after the ban lapses. During the five-year period, Martin may not serve in any advisory or consulting role with the U.S. Public Health Service, which includes the National Institutes of Health.


Alcohol researcher faked data in animal studies, US watchdog says


A neuroscientist who studies alcohol and stress faked data in two published studies and two grant applications submitted to the National Institutes of Health (NIH), according to a U.S. government watchdog. 

Lara S. Hwa, an assistant professor of neuroscience at Baylor University in Waco, Texas, since January 2021, “engaged in research misconduct by knowingly or recklessly falsifying and/or fabricating data, methods, results, and conclusions in animal models of alcohol use disorders,” the U.S. Office of Research Integrity (ORI) concluded in its findings.

ORI found Hwa, who has not immediately responded to our request for comment, “falsified and/or fabricated experimental timelines, group conditions, sex of animal subjects, mouse strains, and behavioral response data” in the grant applications and papers. The articles were published when she was a postdoc at the Bowles Center for Alcohol Studies at the University of North Carolina School of Medicine in Chapel Hill. 

One of the grant applications in which ORI said Hwa submitted fake data, “Long-term Alcohol Drinking Alters Stress Engagement of BNST Circuit Elements,” was funded for $734,145. The other grant application was “administratively withdrawn” last December. Hwa was a principal investigator on another grant, not included in ORI’s findings, that received $84,218 from 2013-2014. 

“Alcohol drinking alters stress response to predator odor via BNST kappa opioid receptor signaling in male mice,” one of the papers included in ORI’s findings, appeared in eLife in July 2020. The authors retracted it in November 2021 “based on error [sic] in methods and data reporting, which they identified following publication, that cast doubt on the conclusions.” 

The extensive retraction notice stated that five co-authors, not including Hwa, the first author, notified the senior author of an error in the published data. Then, the authors “deeply examined the results and identified several more errors” in the paper, which are detailed in the notice. 

For several figures, the authors determined the published data “did not match the raw data values,” and after re-analyzing the raw data, “there were multiple changes in statistical significance, leading to an overall change in the interpretation of the results.” 

The notice concluded: 

The changes that would be required to correct the paper are sufficiently extensive that its overall conclusions must currently be considered to be in doubt. All of the authors are in agreement that retraction is the appropriate course of action.

The other paper, “Predator odor increases avoidance and glutamatergic synaptic transmission in the prelimbic cortex via corticotropin-releasing factor receptor 1 signaling,” published in Neuropsychopharmacology in 2018, has not been flagged. As part of the settlement agreement Hwa agreed to, she must request that the paper – which has been cited 24 times, according to Clarivate’s Web of Science – be corrected or retracted. 

Hwa also agreed to four years of supervision of her research, beginning on August 18 of this year. During the four years of supervision, she also may not serve on any NIH advisory or peer review committee. 

Baylor University has not immediately responded to our request for comment on the findings.


UCLA walks back claim that application for $50 million grant included fake data

More than a month after a federal watchdog announced that a UCLA scientist had included fake data in a grant application worth more than $50 million, the university says the application didn’t have issues, after all. In early August, the U.S. Office of Research Integrity (ORI) said that Janina Jiang faked data in eleven grant …

Former NCI postdoc faked data, says federal watchdog

A former postdoc at the National Cancer Institute faked 15 figures and a movie in grant applications, presentations, a paper, and an unpublished manuscript, according to a federal watchdog. The finding from the U.S. Office of Research Integrity (ORI) comes more than a year after PLOS Biology retracted a 2016 paper and noted that the …

‘A significant departure’: Former Kentucky researcher faked 28 figures in grant applications and papers, say Feds

A former researcher at the University of Kentucky committed misconduct in both published papers and grant applications, according to a federal watchdog. The finding from the Office of Research Integrity (ORI) comes two years after the University of Kentucky announced that it had concluded that the scientist, Stuart Jarrett, had committed misconduct on four papers …

UCLA veteran researcher faked data in 11 grant applications, per Feds

A 10-year veteran of the University of California, Los Angeles “engaged in research misconduct by knowingly and recklessly” faking data in 11 different grant applications, according to a U.S. federal watchdog. Janina Jiang, who joined UCLA’s pathology and laboratory medicine department in 2010, faked “flow cytometry data to represent interferon-γ (IFN-γ) expression in immune cells …