Western scientists should continue to cooperate with Chinese scientists

China has become a science powerhouse, and it achieved this goal, in part, by sending its young scientists abroad to train at universities in Canada, Australia, the United States, and Europe. Many of these countries have signed scientific cooperation agreements with China, but some of those agreements are in danger of lapsing as China is increasingly seen as an untrustworthy enemy.


On the importance of controls

When doing an experiment, it's important to keep the number of variables to a minimum and to include scientific controls. There are two types of controls. A negative control covers the possibility that you will get a signal by chance; for example, if you are testing an enzyme to see whether it degrades sugar, then the negative control will be a tube with no enzyme. Some of the sugar may degrade spontaneously, and you need to know this. A positive control is when you deliberately add something that you know will give a positive result; for example, if you are doing a test to see if your sample contains protein, then you want to add an extra sample that contains a known amount of protein to make sure all your reagents are working.
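The logic of the two controls can be written down explicitly. Here's a minimal sketch with invented numbers (none of these values come from a real assay): the negative control sets the background you subtract, and the positive control tells you whether the run can be trusted at all.

```python
# Hypothetical readings from the sugar-degradation assay described above.
# All numbers are invented for illustration.
sample_signal = 0.82      # tube with enzyme + sugar
negative_control = 0.10   # sugar but no enzyme: spontaneous degradation
positive_control = 0.95   # an enzyme known to degrade the sugar

# The negative control measures the signal you'd get by chance,
# so subtract it before interpreting the sample.
corrected_signal = sample_signal - negative_control

# The positive control checks that the reagents are working at all;
# if it barely exceeds background, the whole run is suspect.
assay_is_valid = (positive_control - negative_control) > 0.5
```

If `assay_is_valid` comes out false, the corrected signal is meaningless no matter how large it is.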

Most real controls are more complicated than the examples I gave, but the principle is important. It's true that some experiments don't appear to need the appropriate controls, but that may be an illusion. The controls might still be necessary in order to interpret the results properly, but they're not done because they are very difficult. This is often true of genomics experiments.

Consider the ENCODE experiments, where a great effort was made to map RNA transcripts, transcription factor binding sites, and open chromatin domains. In order to interpret these results correctly, you need both positive and negative controls, but the most important is the negative control. Here's how Sean Eddy describes the required control (Eddy, 2013):

To clarify what noise means, I propose the Random Genome Project. Suppose we put a few million bases of entirely random synthetic DNA into a human cell, and do an ENCODE project on it. Will it be reproducibly transcribed into mRNA-like transcripts, reproducibly bound by DNA-binding proteins, and reproducibly wrapped around histones marked by specific chromatin modifications? I think yes.

... Even as a thought experiment, the Random Genome Project states a null hypothesis that has been largely absent from these discussions in genomics. It emphasizes that it is reasonable to expect reproducible biochemical activities ... in random unselected DNA.

This may be a case where creating the control isn't easy but we are reaching the stage where it may become necessary because stamp-collecting will only get you so far. Ford Doolittle has come up with a similar type of control to interpret the functional elements (FE) described by ENCODE (Doolittle, 2013):

Suppose that there had been (and probably, some day, there will be) ENCODE projects aimed at enumerating, by transcriptional and chromatin mapping, factor footprinting, and so forth, all of the FEs in the genomes of Takifugu and a lungfish, some small and large genomed amphibians (including several species of Plethodon), plants, and various protists. There are, I think, two possible general outcomes of this thought experiment, neither of which would give us clear license to abandon junk. The first outcome would be that FEs (estimated to be in the millions in our genome) turn out to be more or less constant in number, regardless of C-value—at least among similarly complex organisms. ... The second likely general outcome of my thought experiment would be that FEs as defined by ENCODE increase in number with C-value, regardless of apparent organismal complexity.

I've been thinking a lot lately about transcripts and alternative splicing. Massive numbers of RNAs are being identified in all kinds of tissues and all kinds of species now that the techniques have become routine. When multiple transcript variants from the same gene are identified they are usually interpreted as genuine examples of alternative splicing. The field needs controls. The negative control is similar to the one proposed by Sean Eddy but it's important to have a positive control, which in this case would be a well-characterized set of genes with real alternative splicing where the function of the splice variants has been demonstrated. If your RNA-Seq experiment fails to detect the known alternatively spliced genes then something is wrong with the experiment.
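As a sketch of what that positive control looks like in practice, here's some hypothetical Python. The gold-standard gene names are classic textbook examples of functional alternative splicing, but the "detected" set and the 90% threshold are invented for illustration:

```python
# Genes with well-documented, functional splice variants (classic examples).
gold_standard = {"CALCA", "DSCAM", "FN1", "TPM1"}

# Genes a hypothetical RNA-Seq analysis flagged as alternatively spliced.
detected_as_spliced = {"CALCA", "FN1", "TPM1", "NOVEL_1", "NOVEL_2"}

# How many of the known positives did the experiment recover?
recovered = gold_standard & detected_as_spliced
recovery_rate = len(recovered) / len(gold_standard)

# If the known positives are mostly missing, the novel variants
# can't be trusted either.
experiment_passes = recovery_rate >= 0.9
```

In this made-up example the experiment misses DSCAM, the recovery rate is 75%, and the run fails the positive control, so the two novel "variants" shouldn't be believed.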

It's not easy to identify this set of genes; that's why I admire the effort made by a graduate student (soon to be Ph.D.) at the University of British Columbia, Shams Bhuiyan, who tried very hard to comb the literature to come up with some gold standards to serve as positive controls (Bhuiyan, 2018). His efforts were not very successful because there aren't very many of these genuine examples. This is a problem for the field of alternative splicing but most workers ignore it.

This brings me to a recent paper that caught my eye:

Uebbing, S., Gockley, J., Reilly, S.K., Kocher, A.A., Geller, E., Gandotra, N., Scharfe, C., Cotney, J. and Noonan, J.P. (2021) Massively parallel discovery of human-specific substitutions that alter neurodevelopmental enhancer activity. Proc. Natl. Acad. Sci. (USA) 118: e2007049118. [doi: 10.1073/pnas.2007049118]

Genetic changes that altered the function of gene regulatory elements have been implicated in the evolution of human traits such as the expansion of the cerebral cortex. However, identifying the particular changes that modified regulatory activity during human evolution remain challenging. Here we used massively parallel enhancer assays in neural stem cells to quantify the functional impact of >32,000 human-specific substitutions in >4,300 human accelerated regions (HARs) and human gain enhancers (HGEs), which include enhancers with novel activities in humans. We found that >30% of active HARs and HGEs exhibited differential activity between human and chimpanzee. We isolated the effects of human-specific substitutions from background genetic variation to identify the effects of genetic changes most relevant to human evolution. We found that substitutions interacted in both additive and nonadditive ways to modify enhancer function. Substitutions within HARs, which are highly constrained compared to HGEs, showed smaller effects on enhancer activity, suggesting that the impact of human-specific substitutions is buffered in enhancers with constrained ancestral functions. Our findings yield insight into how human-specific genetic changes altered enhancer function and provide a rich set of candidates for studies of regulatory evolution in humans.

This is a very complicated set of experiments using techniques that I'm not familiar with. I suspect that there are only a few hundred scientists in the entire world who can read this paper and understand exactly what was done and whether the experiments were performed correctly. I imagine that there are even fewer who can evaluate the results in the proper context.

The objective is to identify mutations in the human genome that are responsible for making us different from our ancestors, notably the common ancestor we share with chimps. The authors assume, correctly, that these differences are likely to reside in regulatory sequences. They focused on regions of the genome that have been previously identified as the sites of chromatin modifications and/or transcription factor binding sites. They then narrowed down the search by choosing only those sites that showed either accelerated changes in the human lineage (1,363 HARs) or increased enhancer activities in humans (3,027 HGEs).

All of these sites, plus their chimp counterparts, were linked to reporter genes, and the constructs were assayed for their ability to drive transcription of the reporter gene in cultures of human neural stem cells. Those cells were chosen because the authors expect a lot of human-specific changes in brain cells as opposed to other tissues. (That's not a reasonable assumption; furthermore, it looks like brain cells have a lot more spurious transcription than most other cells, with the exception of testes.)

They found that only 12% of their HARs were active in this assay and only 34% of HGEs were active. That's interesting but it doesn't tell us a lot; for example, it doesn't tell us whether any of these sites are biologically significant because we don't have the results of Sean Eddy's Random Genome Project to tell us how many of ENCODE's sites are significant. We know that some small fraction of random DNA sequences have enhancer activity and we know that this fraction increases when you select for stretches of DNA that are known to bind transcription factors. What that means is that many of these sites are not real regulatory sequences but we don't know which ones are real and which ones are spurious.
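A back-of-the-envelope calculation shows why the missing negative control matters. The background rate below is invented (Eddy's Random Genome Project is exactly the experiment that would measure it), but it illustrates how much of that 12% could be noise:

```python
# HARs tested in the paper, and the ~12% that scored as active.
n_hars = 1363
observed_active = round(0.12 * n_hars)

# Suppose (invented number) that 5% of random DNA would score as
# "active" in the same assay. The expected background is then:
assumed_background_rate = 0.05
expected_by_chance = n_hars * assumed_background_rate  # roughly 68 sites
```

Under that assumption, nearly half of the ~164 "active" HARs would be indistinguishable from noise, and without the control there's no way to know which half.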

Next, they focused on those sites that showed differential expression of the reporter genes when you compared the chimp and human versions. About 3% of all HARs and 12% of all HGEs fell into this category. Then they looked at the specific nucleotide differences to see if they were responsible for the differential expression and they found some examples, but most of them were modest changes (less than 2-fold). Here's the conclusion:

We identified 424 HARs and HGEs with human-specific changes in enhancer activity in human neural stem cells, as well as individual sequence changes that contribute to those regulatory innovations. These findings now enable detailed experimental analyses of candidate loci underlying the evolution of the human cortex, including in humanized cellular models and humanized mice. Comprehensive studies of the HARs and HGEs we have uncovered here, both individually and in combination, will provide novel and fundamental insights into uniquely human features of the brain.

This is a typical ENCODE-type conclusion. It leaves all the hard work to others. But here's the rub. How many labs are willing to take one of those 424 candidates and devote money, graduate students, and post-docs to finding out whether they are really regulatory sites? I bet there are very few because, like the rest of us, they are so skeptical of the result that they are unwilling to risk their careers on it.

The experiments conducted by Uebbing et al. lack proper controls. There are times when simple data collection experiments are justified and there are times when additional genomics survey experiments are useful but as we enter 2021 we need to recognize that those times are behind us. The time has come to sort the wheat from the chaff and that means calling a halt to publishing experiments that can't be meaningfully interpreted.


Image Credit: The control flowchart is from ErrantScience.com.

Bhuiyan, S.A., Ly, S., Phan, M., Huntington, B., Hogan, E., Liu, C.C., Liu, J. and Pavlidis, P. (2018) Systematic evaluation of isoform function in literature reports of alternative splicing. BMC Genomics 19: 637. [doi: 10.1186/s12864-018-5013-2]

Doolittle, W.F. (2013) Is junk DNA bunk? A critique of ENCODE. Proc. Natl. Acad. Sci. (USA) 110: 5294-5300. [doi: 10.1073/pnas.1221376110]

Eddy, S.R. (2013) The ENCODE project: missteps overshadowing a success. Current Biology 23: R259-R261. [doi: 10.1016/j.cub.2013.03.023]

How to make an impact in science policy as a graduate student

With a lack of scientists in the Executive Branch and a growing national sentiment to include science in policymaking, there has never been a better time to get involved with science policy as an early-career scientist.

Opinion: We’re at War for Science Literacy, Not Against Faith

On January 18, 2018, House Bill 258 was introduced to the Alabama House of Representatives. As reported by the National Center for Science Education, if enacted, this bill would allow teachers to present

The seven biggest problems in science

Here's an interesting article about the biggest problems in (American) science: The 7 biggest problems facing science, according to 270 scientists. Most of them apply to science in other countries as well.

I've added brief comments under six of the headings. Those are MY opinions, not necessarily those of the authors. The comment under #6 is a direct quote from the article.
  1. Academia has a huge money problem.
    There's not enough money to do high quality science, especially basic science.
  2. Too many studies are poorly designed. Blame bad incentives.
    Some experiments are poorly designed. All scientists are under pressure to make their results seem important.
  3. Replicating results is crucial. But scientists rarely do it.
    Replication is important—especially in medical studies—but I think this problem is exaggerated.
  4. Peer review is broken.
    The system (peer review) isn't working well. That doesn't mean there's a better system.
  5. Too much science is locked behind paywalls.
    This was never a problem in the past when you had to go to the library to read science journals. You could photocopy whatever you wanted. Now it's a problem because we want instant access from our laptops.
  6. Science is poorly communicated to the public.
    "But not everyone blamed the media and publicists alone. Other respondents pointed out that scientists themselves often oversell their work, even if it's preliminary, because funding is competitive and everyone wants to portray their work as big and important and game-changing.

    'You have this toxic dynamic where journalists and scientists enable each other in a way that massively inflates the certainty and generality of how scientific findings are communicated and the promises that are made to the public,' writes Daniel Molden, an associate professor of psychology at Northwestern University. 'When these findings prove to be less certain and the promises are not realized, this just further erodes the respect that scientists get and further fuels scientists' desire for appreciation.'"
  7. Life as a young academic is incredibly stressful.
    This is not just a problem for my younger colleagues. It affects all of us. It affects morale in an academic department and it affects the way science is done.

Here’s why Alain Beaudet, President of the Canadian Institutes of Health Research, should resign

The Canadian Institutes of Health Research (CIHR) is the main source of research funding for Canadian health researchers, including those doing basic research like most of the researchers in my biochemistry department.

A few years ago, CIHR decided to revamp the process of applying for and obtaining research grants. They did this without taking into consideration the wishes of most applicants. (They did "consult," but consulting isn't the same as listening.)

The result has been a disaster. Most researchers are confused and discouraged by the new process and there's great fear that the results of the next competitions will be harmful to basic research and harmful to new investigators.

But even before the new rules came into play the funding of basic, curiosity-motivated, science was taking a major hit. Many mid-career basic researchers at the University of Toronto have lost their grants or are struggling to make do with a lot less money. This is partly due to a lack of money in the system but it's been exacerbated by a deliberate shift in priorities under the previous Conservative government of former Prime Minister Stephen Harper.

These are some of the reasons why Canadian researchers have been calling for Alain Beaudet to resign [Support basic research with new leaders at the Canadian Institutes of Health Research (CIHR)].

In light of the controversy surrounding CIHR and the grant process, you would think that the President would take responsibility for the mess and quit. You would think, at least, that in the annual report there would be some mention of the problems and how they are going to be fixed. Let's look at the President's Message.
Q: What were CIHR’s biggest accomplishments or milestones of 2014-15?

... After years of preparation and work, we launched the first Foundation grants competition. This first pilot was a huge challenge for CIHR. It was, for researchers, a new way of writing a grant; for evaluators, a new way of reviewing a grant; and for CIHR, a new way of administering the grant delivery process. At the same time, we were holding the last competition of our traditional open program. So, it was a bit like changing the motor of a plane while in flight!
No mention of the fact that attempting to do something as stupid as changing the motor of an airplane in flight had the predictable outcome. The plane crashed and burned!

But here's the part that upsets me more than the grant application fiasco.
Q: We are seeing a shift toward more collaboration and partnerships in health research – why is this happening?

Research is changing. Nobody is doing their own research in isolation anymore. We have discovered that innovation flourishes when we bring people from different disciplines together. Put together a mathematician, a physicist, and a biologist in a room and great things will happen.
We are scientists. Scientists base their decisions on scientific evidence not on speculation and wishful thinking. There's no evidence that "innovation" (whatever that is) is stimulated by forcing scientists from disparate disciplines to work together. In fact, I suspect this is counter-productive. If the collaborations don't form naturally then rigging the system to make this happen is probably going to produce less, not more, knowledge.

If Alain Beaudet were correct, then there should be dozens of biologists at the Large Hadron Collider in Geneva, and dozens of biochemists helping geologists with their field work. If he were correct, then one of the largest such groups, the ENCODE Consortium, should have been churning out new knowledge but, instead, their publications have impeded our understanding of the human genome because the bioinformatics experts don't understand biology.
We are also moving toward research that is more and more focused on problems rather than focused on a discipline. We used to do research in physiology or in anatomy or in biochemistry. Today, we are doing research on preventing lung diseases, or treating chronic heart disease. Researchers are now thinking of the impact of their research from the get-go.
This is a problem ... not a feature. Yes, it's true that many of my colleagues are thinking about "impact" far more than they used to. That's because their basic research questions are not going to be funded under the new rules. They have no choice.

I don't think this is wise but the leaders at CIHR just went ahead under the assumption that destroying basic curiosity-motivated research is a good thing.

The problem isn't so much whether Alain Beaudet is right but whether, as scientists, we should be making decisions without considering all the evidence and all the implications. Most of us have very little confidence in the CIHR leadership. We don't think their decisions were informed. We can respect informed decisions that disagree with our own views but only if they are based on evidence and sound reasoning.

None of the recent decisions by CIHR deserve respect. The leaders don't deserve respect.
It is important to encourage partnerships at all levels and this applies to international partnerships as well. When we tap the talent of two countries instead of one, we have a better selection of brains to start with and it is always better to have more brains! Working with another country can offer new ideas and a different cultural approach, which is very important for creativity and innovation.
There's no evidence that collaborators from different countries are more creative than collaborators from within the same country. This is just silly rhetoric. It's the sort of thing a politician might say but not a scientist.
Q: Looking to the future, what change would you like to see in the realm of health research funding?

We must allow freedom for creativity, and this is what we are doing with our new approach to funding investigator-initiated research. We should increase that freedom and take more risks. Traditionally, I think we have been a bit like an old investor: very prudent. We invest in “blue chip stocks” but we do not invest in the daring little tech company… a company that might fail. However, if that company does not fail – if it succeeds, we are going to see a huge return on investment.
What does this mean? Taken at face value, it might mean more support for basic, curiosity-motivated research on the grounds that there might be a big payoff of knowledge in the future. But that's not CIHR policy. The current policy is to allow "freedom for creativity" by shoehorning basic researchers into groups working on medically relevant problems and making them re-write their grant applications to focus on the problems that CIHR has decided are worthy of funding.1

It's one thing to create a policy that I don't like, but to pretend that it's something else is worse. I could respect a President who believed in something and stood up and defended it. I could respect a President who admits that things didn't work out as planned and promised to do better. I could respect a President who listens to other points of view and understands them—even if he disagrees.

Alain Beaudet is not that kind of CIHR President. He should resign and make way for a President who will listen to the research community.


1. Universities have been co-opted into supporting this scheme [In defense of curiosity-motivated research ].

Science and skepticism

The National Academies of Sciences (USA) formed a committee to look into scientific integrity. A summary of the report was published in the June 26th issue of Science (Alberts et al., 2015).

I'd like to highlight two paragraphs of that report.
Like all human endeavors, science is imperfect. However, as Robert Merton noted more than half a century ago "the activities of scientists are subject to rigorous policing, to a degree perhaps unparalleled in any other field of activity." As a result, as Popper argued, "science is one of the very few human activities—perhaps the only one—in which errors are systematically criticized and fairly often, in time, corrected." Instances in which scientists detect and address flaws in work constitute evidence of success, not failure, because they demonstrate the underlying protective mechanisms of science at work.
All scientists know this, but some of us still get upset when other scientists correct our mistakes. We have learned to deal with such criticism—and dish it out ourselves—because we know that's how knowledge advances. Our standards are high.

The general public doesn't get this. They think that everything that is published in the scientific literature must be correct or it wouldn't have passed peer review. They don't realize that most work has to be repeated and scrutinized before it is accepted by the scientific community. They don't understand that skepticism is an integral and important part of science.

When the scientific process of criticism and controversy is on full display in the public forum, the general public sees this as a weakness and it affects their confidence in science. Scientists, on the other hand, see this as evidence that the process is working as it should. Some groups (e.g. creationists) exploit the proper workings of science to try and convince their followers that debates among scientists mean that all of science is wrong.

Scientists have to recognize that legitimate debate and discussion are a good thing, but they also have to take steps to avoid creating controversy when it isn't necessary. The ENCODE publicity fiasco is a good example. The ENCODE Consortium created a controversy by claiming that 80% of the human genome is functional. They should have known that this extreme statement would be challenged, and they should have made sure that they represented the evidence against their claim. Instead, they ignored that contrary evidence and did not cite any of the scientific literature that would have weakened their case. That was bad science, even though we all agree that the Consortium members are entitled to express an opinion (even if they are wrong). They are not entitled to abandon skepticism and present only one side of a controversial issue. That's not what scientific integrity is about.

The NAS committee was mainly concerned with fraud and with papers containing results that are not reproducible. However, some of their advice relates to papers that are not fraudulent and the experimental results are valid.
Universities should insist that their faculties and students are schooled in the ethics of research, their publications feature neither honorific nor ghost authors, their public information offices avoid hype in publicizing findings, and suspect research is promptly and thoroughly investigated. All researchers need to realize that the best scientific practice is produced when, like Darwin, they persistently search for flaws in their arguments. Because inherent variability in biological systems makes it possible for researchers to explore different sets of conditions until the expected (and rewarded) result is obtained, the need for vigilant self-critique may be especially great in research with direct application to human disease. [my emphasis LAM]
It's all about critical thinking—something that seems to be in short supply these days.


Alberts, B., Cicerone, R.J., Fienberg, S.F., Kamb, A., McNutt, M., Nerem, R.M., Schekman, R., Shiffrin, R., Stodden, V., Suresh, S., Zuber, M.T., Pope, B.K. and Jamieson, K.H. (2015) Self-correction in science at work. Science 348: 1420-1422. [PDF]

National Academy to Congress: For economic benefits, support basic research

It’s a sobering exercise to go through your day and identify those common, essential things that exist only thanks to fundamental scientific discoveries made in the last 100 years. Of course some of our technology was developed in the Edisonian style, invented without any recourse to an understanding of the underlying science. But so much of the technology of modern life would not be possible without major basic science discoveries made during the 20th century. How we eat, communicate, travel, work and care for our health are all closely tied up with fundamental discoveries made in the past century. In other words, basic science has made a huge contribution to society’s economic growth and well-being.

That basic science generates huge material benefits has been the major justification for federally-funded research since Vannevar Bush’s 1945 manifesto. Unlike, say, the National Endowment for the Arts, which exists mainly to support a vibrant culture, federal science funding is specifically intended to generate tangible economic benefits for society — not simply to support science for its own sake.

And so it’s not surprising that Congress wants to know how well our federal investment in research is paying off. As part of the 2011 America COMPETES act, Congress required the National Academy of Sciences to look into the question of how we should evaluate the economic benefits of federally funded science. The National Academy duly convened a committee, and that committee has produced a report called Furthering America’s Research Enterprise. As I write in my latest column for Pacific Standard, the report is a robust defense of the value of basic research. If you want to get the most out of our federal research dollars, the committee argues, then don’t focus directly on economic returns; focus on ensuring that we have a healthy, balanced, and world-class basic research enterprise.

There are a few areas the report doesn’t really cover, like R&D tax incentives, patent law, and federal policy towards industry R&D in general. This report is mostly about money spent on research agencies like the NSF, NIH, NASA, and the DOE. They do argue that the government should more actively support proof-of-concept projects, those risky bridging points between a fundamental discovery and the realization of a technology’s commercial potential.

But mostly it’s about basic research. The U.S. “lacks an institutionalized capability for systematically evaluating the nation’s research enterprise as a whole, assessing its performance, and developing policy options for federally funded research,” and so if Congress is interested in measuring our research performance, it should focus on building that capacity. I’m not optimistic on that point.



Get Science Right (in Canada)

The Canadian Association of University Teachers (CAUT) has launched a campaign to alert the public about changes in science policy and funding. The Conservative government of Stephen Harper has shifted funds toward directed research and starved Canadian scientists who focus on basic, curiosity-motivated, research.

What this means is that young scientists are finding it increasingly difficult to get funding from the government. It means that scientists in mid-career are losing their grants and this means that research technicians have to be fired, graduate students can't be funded, and post-docs have to find another position.

Why is this important? Why should you care? Those are the questions that CAUT wants to answer by sponsoring meetings across the nation to explain why it's important to "Get Science Right." Come to a Town Hall meeting at the University of Toronto (Toronto, Ontario, Canada) and learn more. The meeting starts at 7pm in room 119 at Emmanuel College (Victoria University). [Facebook: Get Science Right - Toronto Town Hall]

Let me know if you plan to attend. We could get together before or after the meeting.



Better stem cell tech, more controversy

Over at Pacific Standard, I offer a brief layman’s guide to the latest pluripotent stem cell technologies, and I argue that better stem cell technology will not make the ethical controversy go away. (In last week’s Nature, Martin Pera & Alan Trounson make a similar point.)

To understand where I’m coming from, let’s step back a few years, to the aftermath of the Bush Administration’s controversial decision to limit NIH-funded research on human embryonic stem cell (ESC) lines. Back then, much of the debate was over the merits and ethics of ESCs versus lineage-restricted adult stem cells: ESCs were for the most part derived from leftover IVF embryos or aborted fetuses (and thus didn’t carry the genome of a patient). Adult stem cells could be taken from patients but were much more restricted in their potential applications. Dolly the sheep was old news at that point, but the technology that created Dolly (and could thus also create embryonic stem cells with the genome of a living adult) did not actually work with human cells.

The playing field has changed dramatically – we can now make ESCs that have custom genomes, by creating cloned embryos or by making iPS cells. ‘Adult’ stem cells, while still useful in some ways, are going to remain niche players for both research and patient treatment.

And so today the issue is not whether we should collect ESCs from aborted fetuses or perfectly viable but unwanted IVF embryos. Human embryos created by cloning are (at this point) extremely unlikely to be capable of developing into live, much less healthy, offspring, and so we have almost uniform agreement that reproductive cloning would be bad.

But we clearly now have the capability to mess around at life’s earliest stages, by creating custom human embryos that could be used to create important cell culture models for disease, or to treat patients with stem cells that have had their genomes repaired. Scientists need to be out there explaining the new parameters of the debate, and making the case that these powerful new technologies can bring great benefits that were out of technology’s reach just a few years ago.