Study retracted after finding a mistaken recoding of the data

A study found that a hospital program significantly reduced the number of hospitalizations and emergency department visits. Great. But then the researchers realized that the data was recoded incorrectly, and the program actually increased hospitalizations and emergency department visits. Not so great.

They retracted their paper:

The identified programming error was in a file used for preparation of the analytic data sets for statistical analysis and occurred while the variable referring to the study “arm” (ie, group) assignment was recoded. The purpose of the recoding was to change the randomization assignment variable format of “1, 2” to a binary format of “0, 1.” However, the assignment was made incorrectly and resulted in a reversed coding of the study groups. Even though the data analyst created and conducted some test analysis programs, they were of the type that did not show any labeling of the arm categories, only the “arm” variable in a regression, for example.
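The failure mode the retraction describes is easy to reproduce. Here is a minimal sketch (the arm codes follow the retraction's description, but the labels and data are hypothetical, not from the study) of how a reversed recode slips past tests that never show labeled arm categories, and how a labeled cross-tabulation would have exposed it:

```python
from collections import Counter

# Hypothetical data: the trial's "arm" variable is coded 1/2 and must
# become binary 0/1 for analysis. The group labels are assumed.
arm = [1, 2, 2, 1, 2]
labels = {1: "control", 2: "intervention"}

# Intended recode: control (1) -> 0, intervention (2) -> 1.
correct = [0 if a == 1 else 1 for a in arm]

# The reversed recode: control becomes 1 and intervention becomes 0,
# silently swapping the study groups in every downstream regression.
flipped = [1 if a == 1 else 0 for a in arm]

# A test that only uses the bare "arm" variable in a regression cannot
# catch this. A labeled cross-tab can: every "control" row carries code 1.
crosstab = Counter((labels[a], f) for a, f in zip(arm, flipped))
print(crosstab)
```

Tabulating the recoded variable against the labeled original is cheap insurance: any mass in the wrong cells (here, all of it) signals a mis-mapped recode.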

Here’s the original, now-retracted study. And here’s the revised one.

Data can be tricky, and mishandling it can lead to unintended consequences. Be careful out there.


Science formally retracts LaCour paper

Last week, graduate student Michael J. LaCour was in the news for allegedly making up data. The results were published in Science. LaCour's co-author Donald Green requested a retraction, but the paper stayed while the request was considered. Today, Science formally fulfilled the request.

The reasons for retracting the paper are as follows: (i) Survey incentives were misrepresented. To encourage participation in the survey, respondents were claimed to have been given cash payments to enroll, to refer family and friends, and to complete multiple surveys. In correspondence received from Michael J. LaCour's attorney, he confirmed that no such payments were made. (ii) The statement on sponsorship was false. In the Report, LaCour acknowledged funding from the Williams Institute, the Ford Foundation, and the Evelyn and Walter Haas Jr. Fund. Per correspondence from LaCour's attorney, this statement was not true.

This is like a car accident I can't look away from, and it continues to get worse. Virginia Hughes for BuzzFeed reported a discrepancy in LaCour's listed funding sources, as noted in the Science retraction.

In the study's acknowledgements, LaCour states that he received funding from three organizations — the Ford Foundation, Williams Institute at UCLA, and the Evelyn and Walter Haas, Jr., Fund. But when contacted by BuzzFeed News, all three funders denied having any involvement with LaCour and his work.

Then Jesse Singal for Science of Us looked closer at LaCour's CV and it appears he made up his largest funding source.

The largest of these is a $160,000 grant in 2014 from the Jay and Rose Phillips Family Foundation of Minnesota. But Patrick J. Troska, executive director of the foundation, which is focused on projects that combat discrimination, wrote in an email to Science of Us, "The Foundation did not provide a grant of any size to Mr. LaCour for this research. We did not make a grant of $160,000 to him."

Just yesterday, Singal reported another discrepancy in LaCour's CV: a made-up teaching award. When Singal asked LaCour about it, LaCour removed it from the CV, posted a new file to his site, and said he didn't know what Singal was talking about. The original CV was still cached on the UCLA server. Oof.

People have also started to examine LaCour's previous work, and it's not looking good.

Since this whole thing started, LaCour has stayed mostly quiet on the advice of his lawyer and says he will have a "definitive response" on or before May 29, 2015. That's tomorrow. And so I wait, unable to look away.

As a former graduate student, I keep trying to put myself in a similar situation. It's crazy. I want LaCour to drop down a response — a giant stack of papers, pages and pages long — raise his hands in the air, and just disprove everything. But it doesn't look like that's going to happen.


Narcolepsy update!

Last year, I wrote a post about the potential link between autoimmune dysfunction and narcolepsy. Today, a major study published in Science Translational Medicine, which linked narcolepsy to autoimmunity targeted at hypocretin-expressing neurons, was retracted. Ed Yong wrote about the original study when it was released and posted this update on his blog at National Geographic.

Sometimes, even things in big journals (especially big journals?) turn out to be not quite true.


Retraction action, what’s your faction: the dangers of citation worship

If you ask scientists to list words they are most afraid to hear associated with their work, I suspect “retraction” would rank high on the list. Retraction is a kind of death sentence, applied only when papers contain serious methodological errors or were tainted by fraud.

So the recent retraction of a PLoS Pathogens paper linking the virus XMRV to prostate cancer, following a new PLoS ONE paper that demonstrated that the original results were due to contamination, caught many (including the authors of the original paper, many of whom were involved in the followup study) off guard. Martin Enserink at ScienceNOW and Retraction Watch have excellent posts with details on the story.

Before offering my thoughts on this, I want to state at the outset that I have more than a passing interest in the story. I was one of the co-founders of PLoS, am a member of its Board of Directors, and continue to play an active role in its activities. I also worked closely with the senior author on the original paper – Joe DeRisi – for three years while we were in Pat Brown’s lab at Stanford, and he remains a good friend. He is not only one of the most creative people I know, he is one of the best, and most careful, experimentalists I have ever met.

Putting aside the question of retraction for a moment, this is exactly how science is supposed to work. Several very good scientists found an intriguing and potentially important result and published a paper on it. Subsequent efforts failed to confirm their initial result. Rather than digging in their heels and defending their initial study – as many scientists do – the original authors accepted the newer results, and went to great lengths to figure out what had gone wrong. Their new paper is a model of detective work, and a cautionary tale about the challenges of working with clinical samples and viruses that everyone should read.

So it is now pretty clear that the major conclusion of the original paper – the association between XMRV and prostate cancer – is wrong. Obviously, people working in the field and anyone interested in prostate cancer and chronic fatigue syndrome (the subject of a subsequent paper) who come upon the 2006 PLoS Pathogens paper need to know that subsequent studies have shown that the samples were contaminated and the conclusions are no longer accepted by the authors. The question is how to do this.

Unfortunately, in the current world of scientific publishing, there aren’t a lot of ways to do this, and the editors at PLoS Pathogens chose to retract the paper. This retraction was accompanied by an editorial from PLoS Pathogens editor Kasturi Haldar and PLoS Medicine editor Ginny Barbour on the role of retractions in correcting the literature. I don’t agree with the decision to retract this paper, but it is worth understanding their logic:

There is much misunderstanding about retractions. Authors and editors have been notoriously unwilling to use them, for the perceived shame that they bring upon authors, editors, and journals. Journalists regularly note the fact that retractions are increasing and ask whether the scientific literature is thus becoming less reliable. Websites such as Retraction Watch list and dissect retractions – an extra exposure at what is already a difficult time for authors and editors. In addition there is much confusion about how to effect retractions practically. In an effort to bring some clarity to this issue in 2009 the Committee on Publication Ethics of which PLOS Pathogens is a member and one of us (VB) is currently Chair, issued guidelines on retractions, which explicitly state that retractions are appropriate when findings are unreliable, either as a result of misconduct (e.g. data fabrication) or honest error.

In essence, they are trying to expand the definition of retraction away from its common usage as a way to indicate misconduct to include all cases in which the findings of a paper should now be judged unreliable. They go on to explain how they will wield this redefined tool in the future:

We firmly believe that acceleration also requires being open about correcting the literature as needed so that research can be built on a solid foundation. Hence as editors and as a publisher we encourage the publication of studies that replicate or refute work we have previously published. We work with authors (through communication with the corresponding author) to publish corrections if we find parts of articles to be inaccurate. If a paper’s major conclusions are shown to be wrong we will retract the paper. By doing so, and by being open about our motives, we hope to clarify once and for all that there is no shame in correcting the literature. Despite the best of efforts, errors occur and their timely and effective remedy should be considered the mark of responsible authors, editors and publishers.

No matter what Haldar and Barbour want, they cannot erase the stigma of retraction by fiat. When a word means something in the community, it doesn’t matter what a dictionary or some unknown committee says. Retractions are viewed by scientists and the public as marks of shame. Imagine how the students and postdocs who carried out the work described in the 2006 paper must feel. They did nothing wrong. Indeed several participated in the effort to figure out what went wrong – going above and beyond what most people would have done. And the reward for their effort is to have “RETRACTED” show up every time someone searches for them on PubMed? This is not the right solution.

I understand the instinct to want a way to correct the literature, especially in cases like this that have attracted a lot of public attention. But isn’t science ultimately all about correcting the literature? It’s not a singular act to look back at previous work and find things that could have been done better, and even things that are outright wrong. This is a large part of what we do. If you look back at the literature from five years, ten years or longer ago, you will find myriad papers that, given what we know now, have findings that are unreliable and conclusions that are now clearly wrong. Are we going to go back and retract all of these papers? Of course not. That would be insane.

As easy as it might be to dismiss this incident as an isolated example of editorial overreach, this is really just the latest manifestation of a broader problem that plagues scientific publication and poisons the scientific process: the reification of the citation. Going back and correcting published papers only makes sense if you view the scientific literature as an isolated collection of discrete, singular events – publications – commemorated with a sacred mark – the citation. If papers are supposed to stand forever as vessels of truth, then of course you have to purge those that are shown to be wrong – both to protect people from untruths, and to defend the sanctity of the citation.

Researchers dread retractions for the same reason they will sell their souls to publish in high-impact journals – because the currency of academic success is not achievement – it is citations. Sure, they are not unlinked. But where they come into conflict, citations almost always win. A Nature paper is a Nature paper forever – even if the results turn out to be insignificant, or, as is often the case, outright wrong. The only thing that can change that is a retraction.

Thus, in some ways, the proposal by Haldar and Barbour is not reactionary, as many have suggested – it is deeply subversive. By exposing all citations – not just those achieved dishonestly – to the threat of retraction it strips the citation of one of its most valuable properties – permanence.  But despite my love for all things subversive, I do not think this is the right solution, as it ultimately reinforces the idea of the scientific literature as a collection of discrete events.

An obvious solution to all of these problems follows from thinking about the literature as what it really is: a historical record of ideas, discoveries and, yes, mistakes – whose value comes not from static individual pieces, but from the ways in which they are connected and change over time. It is often said that science is “self-correcting”, recognizing that our views of the value and validity of previously published work inevitably change over time as we use, build on and expand upon the work of our colleagues – something perfectly demonstrated by the XMRV story. What we need to do is not to isolate and protect ourselves from the dynamic nature of science, but to embrace it.

It’s disheartening that, in this day of electronic publications and databases, the editors felt that the only way they could ensure that people reading the 2006 XMRV paper would look at it in the context of newer findings was to retract the paper. If we had a way of capturing how new methods, data and ideas were changing our view of earlier work, they would not have needed to even consider something as dire or as clumsy as a retraction. And there is no reason we can’t do this – we have the technical means to switch from one-time assessments of a paper to a system of ongoing evaluation and reevaluation whose output changes as our understanding grows. The only thing stopping us is the continued reification of the citation in science, and our unwillingness to discard it.

UPDATE: I want to emphasize that my goal here was not to take the editors to task. I don’t completely support what they did, but they were trying to deal with a real, immediate problem – people acting on conclusions from a paper whose results nobody now believes to be true. What I was primarily lamenting was the fact that our system does not provide them with any other tool than retraction.