Wakelet of UC Davis "Research Quality and Design Symposium"

Made a mini Wakelet:

The Hidden Gems of Data Accessibility Statements

Sometimes the best part of reading a scientific paper is an unexpected moment of recognition — not in the science, but in the humanity of the scientists. It’s reassuring in a way to find…

Stop Hiding Your Code

We, @PLOS, @PLOSONE and the open source community, will discuss why and how to #ShareYourCode in a tweet chat on 25 April, 10-11am Pacific Daylight Time/6-7pm British Summer Time. Join us! By Peter Wittek. A cornerstone…

Rare Disease Day Spotlight on PLOS Authors: Open Data Repositories in Practice

Science increasingly involves collaborative research groups, program partnerships and shared learnings to encourage transparency, reproducibility and a responsible transition to a more open way of doing science. Open Science policies and best practices are…

Richard Harris (@rrichardh) talk at #UCDavis Oct ’17: Common errors that bedevil medical research

Made a Storify: 

Promoting reproducibility by emphasizing reporting: PLOS ONE’s approach

Posted June 14, 2017 by Jenna Wilson in Editorial and Publishing Policy. As we celebrate PLOS ONE’s ten-year anniversary…

Reupping: Why reproducibility initiatives are misguided

I’m reposting this two-year-old piece because it’s worth reminding ourselves why exact replication has, with minor exceptions, never been an important part of science:

In my latest Pacific Standard column, I take a look at the recent hand-wringing over the reproducibility of published science. A lot of people are worried that poorly done, non-reproducible science is ending up in the peer-reviewed literature.

Many of these worries are misguided. Yes, as researchers, editors, and reviewers we should do a better job of filtering out bad statistical practices and poor experimental designs; we should also make sure that data, methods, and code are thoroughly described and freely shared. To the extent that sloppy science is causing a pervasive reproducibility problem, we absolutely need to fix it.

But I’m worried that the recent reproducibility initiatives are going beyond merely sloppy science, and instead are imposing a standard on research that is not particularly useful and completely ahistorical. When you see a hot new result published in Nature, should you expect other experts in the field to be able to reproduce it exactly?

Not always. To explain why, I’ll hand the mic over to Chris Drummond, a computer scientist and research officer at Canada’s National Research Council:

“Replicability is not Reproducibility: Nor is it Good Science” (PDF)

At various times, there have been discussions arising from the inability to replicate the experimental results published in a paper… There seems to be a widespread view that we need to do something to address this problem, as it is essential to the advancement of our field. The most compelling argument would seem to be that reproducibility of experimental results is the hallmark of science… I want to challenge this view by separating the notion of reproducibility, a generally desirable property, from replicability, its poor cousin. I claim there are important differences between the two. Reproducibility requires changes; replicability avoids them. Although reproducibility is desirable, I contend that the impoverished version, replicability, is one not worth having.

Drummond goes on to explain:

A critical point of reproducing an experimental result is that irrelevant things are intentionally not replicated. One might say, one should replicate the result not the experiment… The sharing of all the artifacts from people’s experiments is not a trivial activity.

In practice, most of us implicitly make Drummond’s distinction between replication and reproduction: we avoid exact replication when it isn’t absolutely necessary, but we are concerned about reproducing the general phenomena in our particular system.

And sometimes well-done research won’t be very reproducible, because it’s on the cutting edge and we may not yet understand all of the relevant variables. You see this over and over in the history of science – the early days of genetics and the initial discoveries of high-energy rays come to mind here. Scientists should do careful work and clearly publish their results. If another lab comes up with a different result, that’s not necessarily a sign of fraud or poor science. It’s often how science makes progress.

And here are two more pieces I wrote on the subject:

The Cancer Reproducibility Project is Incredibly Naive, Probably Useless, and Potentially Damaging

Sloppiness vs Reproducibility

 



PLOS 2015 Reviewer Thank You

2016 is shaping up to be a notable year for PLOS: it’s the 15th anniversary of the organization’s founding as a nonprofit and the 10th anniversary of the groundbreaking journal PLOS ONE. Before looking too far…

How reliable is resting state fMRI?

Arguably, no advance has revolutionized neuroscience as much as the invention of functional magnetic resonance imaging (fMRI). Since its appearance in the early 1990s, its popularity has surged; a PubMed search returns nearly 30,000 publications…

When Open Access is the norm, how do scientists work together online?

The Web was invented to enable scientists to collaborate. In 2000 the Los Alamos National Laboratory commissioned me to write a progress report on web-based collaboration between scientists, Internet Groupware for Scientific Collaboration. Blogs, social media, and Open Access publishing of …
