Sequencing human diploid genomes

Most eukaryotes, including humans, are diploid: they have two copies of each autosome. Thousands of human genomes have been sequenced, but in almost all cases the resulting genome sequence is a composite of sequences from the two homologous chromosomes. If a site is heterozygous—different alleles on the two chromosomes—the alleles are simply recorded as variants.

It would be much better to have complete sequences of each individual chromosome (a true diploid sequence) in order to better understand genetic heterogeneity in the human population. Until recently, there were only two examples in the databases. The first was Craig Venter's genome (Levy et al., 2007) and the second was the genome of an Asian male (YH) (Cao et al., 2015).

Diploid sequences are much more expensive and time-consuming to produce than standard reference-based sequences. That's because you can't just align sequence reads to the human reference genome to obtain position information; instead, you pretty much have to construct de novo assemblies of each chromosome. Using modern technology, it's relatively easy to generate millions of short sequence reads and then match them up to the reference genome to get a genome sequence that combines information from both chromosomes. That's why it's now possible to sequence a genome for less than $1000 (US). De novo assemblies require much more data and more computing power.
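To make the difference concrete, here is a toy sketch (my own illustration in Python, nothing like a real aligner or assembler): reference-guided sequencing only has to find where each read lands on an existing reference, whereas de novo assembly has to reconstruct the sequence purely from overlaps among the reads themselves.

# Toy illustration only: contrast reference-guided mapping with a greedy
# de novo merge of overlapping reads. Sequences are invented examples.

def map_to_reference(read, reference):
    """Reference-guided: find where the read matches the reference (-1 if absent)."""
    return reference.find(read)

def greedy_assemble(reads, min_overlap=3):
    """De novo: repeatedly merge the pair of contigs with the longest suffix/prefix overlap."""
    def overlap(a, b):
        # longest suffix of a that is also a prefix of b
        for k in range(min(len(a), len(b)), min_overlap - 1, -1):
            if a.endswith(b[:k]):
                return k
        return 0
    contigs = list(reads)
    while len(contigs) > 1:
        best = None
        for i, a in enumerate(contigs):
            for j, b in enumerate(contigs):
                if i != j:
                    k = overlap(a, b)
                    if best is None or k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        if k == 0:
            break  # no remaining overlaps: leave the rest as separate contigs
        merged = contigs[i] + contigs[j][k:]
        contigs = [c for n, c in enumerate(contigs) if n not in (i, j)] + [merged]
    return contigs

reference = "ATGGCGTACGTTAGC"
reads = ["ATGGCGT", "GCGTACG", "ACGTTAGC"]
print([map_to_reference(r, reference) for r in reads])  # positions on the reference
print(greedy_assemble(reads))                           # contig rebuilt from overlaps alone

The point of the toy is that the de novo route has to compare every read against every other read, which is why real assemblers need far more data and computing power than reference-guided pipelines.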

A group at a private company (10x Genomics in Pleasanton, California, USA) has developed new software, called Supernova, to assemble diploid genome sequences. They used the technology to add seven new diploid sequences to the databases (Weisenfeld et al., 2017). The resulting assemblies are just draft genomes with plenty of gaps, but this is still a significant achievement.

Here's the abstract,
Weisenfeld, N.I., Kumar, V., Shah, P., Church, D.M., and Jaffe, D.B. (2017) Direct determination of diploid genome sequences. Genome Research, 27:757-767. [doi: 10.1101/gr.214874.116]

Determining the genome sequence of an organism is challenging, yet fundamental to understanding its biology. Over the past decade, thousands of human genomes have been sequenced, contributing deeply to biomedical research. In the vast majority of cases, these have been analyzed by aligning sequence reads to a single reference genome, biasing the resulting analyses, and in general, failing to capture sequences novel to a given genome. Some de novo assemblies have been constructed free of reference bias, but nearly all were constructed by merging homologous loci into single “consensus” sequences, generally absent from nature. These assemblies do not correctly represent the diploid biology of an individual. In exactly two cases, true diploid de novo assemblies have been made, at great expense. One was generated using Sanger sequencing, and one using thousands of clone pools. Here, we demonstrate a straightforward and low-cost method for creating true diploid de novo assemblies. We make a single library from ∼1 ng of high molecular weight DNA, using the 10x Genomics microfluidic platform to partition the genome. We applied this technique to seven human samples, generating low-cost HiSeq X data, then assembled these using a new “pushbutton” algorithm, Supernova. Each computation took 2 d on a single server. Each yielded contigs longer than 100 kb, phase blocks longer than 2.5 Mb, and scaffolds longer than 15 Mb. Our method provides a scalable capability for determining the actual diploid genome sequence in a sample, opening the door to new approaches in genomic biology and medicine.


Cao, H., Wu, H., Luo, R., Huang, S., Sun, Y., Tong, X., Xie, Y., Liu, B., Yang, H., and Zheng, H. (2015) De novo assembly of a haplotype-resolved human genome. Nature Biotechnology, 33:617-622. [doi: 10.1038/nbt.3200]

Levy, S., Sutton, G., Ng, P.C., Feuk, L., Halpern, A.L., Walenz, B.P., Axelrod, N., Huang, J., Kirkness, E.F., Denisov, G., Lin, Y., MacDonald, J.R., Pang, A.W.C., Shago, M., Stockwell, T.B., Tsiamouri, A., Bafna, V., Bansal, V., Kravitz, S.A., Busam, D.A., Beeson, K.Y., McIntosh, T.C., Remington, K.A., Abril, J.F., Gill, J., Borman, J., Rogers, Y.-H., Frazier, M.E., Scherer, S.W., Strausberg, R.L., and Venter, J.C. (2007) The diploid genome sequence of an individual human. PLoS Biol, 5:e254. [doi: 10.1371/journal.pbio.0050254]

What’s in Your Genome?: Chapter 4: Pervasive Transcription (revised)

I'm working (slowly) on a book called What's in Your Genome?: 90% of your genome is junk! The first chapter is an introduction to genomes and DNA [What's in Your Genome? Chapter 1: Introducing Genomes ]. Chapter 2 is an overview of the human genome. It's a summary of known functional sequences and known junk DNA [What's in Your Genome? Chapter 2: The Big Picture]. Chapter 3 defines "genes" and describes protein-coding genes and alternative splicing [What's in Your Genome? Chapter 3: What Is a Gene?].

Chapter 4 is all about pervasive transcription and genes for functional noncoding RNAs. I've finally got a respectable draft of this chapter. This is an updated summary—the first version is at: What's in Your Genome? Chapter 4: Pervasive Transcription.
Chapter 4: Pervasive Transcription

How much of the genome is transcribed?
The latest data indicates that about 90% of the human genome is transcribed if you combine all the data from all the cell types that have been analyzed. This is about the same percentage that was reported by ENCODE in their preliminary study back in 2007 and about the same percentage they reported in the 2012 papers. Most of the transcripts are present in less than one copy per cell. Most of them are only found in one or two cell types. Most of them are not conserved in other species.
How do we know about pervasive transcription?
There are several technologies that are capable of detecting all the transcripts in a cell. The most powerful is RNA-Seq, a technique that copies RNAs into cDNA and then performs massively parallel sequencing ("next-gen" sequencing) on all the cDNAs. The sequences are then matched back to the reference genome to see which parts of the genome were transcribed. The technique is capable of detecting transcripts present at concentrations of less than one copy per cell.
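Here's a minimal sketch of the bookkeeping involved (my own toy Python example with made-up intervals): the mapped positions of the cDNA reads from every cell type are pooled, overlapping regions are merged, and the covered fraction of the genome is reported. Pooling many sparse datasets in this way is how a cumulative figure like "90% transcribed" is reached.

# Toy sketch: estimate the fraction of a genome covered by transcripts when
# RNA-Seq data from many cell types are pooled. Intervals are (start, end),
# 0-based half-open; all numbers below are invented for illustration.

def merge_intervals(intervals):
    """Merge overlapping or adjacent intervals into a non-overlapping list."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

def transcribed_fraction(per_cell_type_intervals, genome_length):
    """Pool mapped-read intervals from all cell types and compute genome coverage."""
    pooled = [iv for ivs in per_cell_type_intervals for iv in ivs]
    covered = sum(end - start for start, end in merge_intervals(pooled))
    return covered / genome_length

# Three "cell types", each transcribing different parts of a 1000-bp "genome".
cell_types = [
    [(0, 300), (400, 450)],
    [(250, 600)],
    [(550, 950)],
]
print(transcribed_fraction(cell_types, 1000))  # 0.95: 95% covered once pooled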
Different kinds of noncoding RNAs
There are ribosomal RNAs, tRNAs, and a variety of unique RNAs like those that are part of RNase P, the signal recognition particle, etc. In addition, there are six main classes of other noncoding RNAs in humans: small nuclear RNAs (snRNAs); small nucleolar RNAs (snoRNAs); microRNAs (miRNAs); short interfering RNAs (siRNAs); PIWI-interacting RNAs (piRNAs); and long noncoding RNAs (lncRNAs). There are many proven examples of functional RNAs in each of the main classes, but there are also large numbers of putative members that may or may not be true functional noncoding RNAs.
        Box 4-1: Long noncoding RNAs (lncRNAs)
There are more than 100,000 transcripts identified as lncRNAs. Nobody knows how many of these are actually real functional lncRNAs and how many are just spurious transcripts. The best analyses suggest that fewer than 20,000 meet the minimum criteria for function, and probably only a fraction of those are actually functional.
Understanding transcription
It's important to understand that transcription is an inherently messy process. Regulatory proteins and RNA polymerase initiation complexes will bind to thousands of sites in the human genome that have nothing to do with transcription of nearby genes.
        Box 4-2: Revisiting the Central Dogma
Many scientists and journalists believe that the discovery of massive numbers of noncoding RNAs overthrows the Central Dogma of Molecular Biology. They are wrong.
        Box 4-3: John Mattick proves his hypothesis?
John Mattick claims that the human genome produces tens of thousands of regulatory RNAs that are responsible for fine-tuning the expression of the protein-coding genes. He was given the 2012 Chen Award by the Human Genome Organization for "proving his hypothesis over the course of 18 years." He has not proven his hypothesis.
Antisense transcription
Some transcripts are complementary to the coding strand of protein-coding genes. This is consistent with spurious transcription yielding junk RNA, but many workers have suggested functional roles for most of these antisense RNAs.
What the scientific papers don't tell you
There are hundreds of scientific papers devoted to proving that most newly-discovered noncoding RNAs have a biological function. What they don't tell you is that most of these transcripts are present in concentrations that are inconsistent with function (<1 molecule per cell). They also don't tell you that conservation is the best measure of function and these transcripts are (mostly) not conserved. More importantly, the majority of these papers don't even mention the possibility that these transcripts could be junk RNA produced by spurious transcription. That's a serious omission—it means that science writers who report on this work are unaware of the controversy.
On the origin of new genes
Some scientists are willing to concede that most transcripts are just noise, but they claim this is an adaptation for future evolution. The idea here is that the presence of these transcripts makes it easier to evolve new protein-coding genes. While it's true that such genes could evolve more readily in a genome full of noise and junk, evolution has no foresight, so the potential to produce new genes in the future cannot be the reason for such a sloppy genome.
How do you determine function?
The best way to determine function is to take a single transcript and show that it has a demonstrable function. If you take a genomics approach, then the best way to narrow down the list is to concentrate on those transcripts that are present in sufficient concentrations and are conserved in related species. In the absence of evidence, the null hypothesis is junk.
Biochemistry is messy
We're used to the idea that errors in DNA replication give rise to mutations and mutations drive evolution. We're less used to the idea that all other biochemical processes have much higher error rates. This is true of highly specific enzymes and it's even more true of complex processes like transcription, RNA processing (splicing), and translation. The idea that transcription errors could give rise to spurious transcripts in large genomes is perfectly consistent with everything we know about such processes. In fact, it's inevitable that spurious transcripts will be common in such genomes.
        Box 4-4: The random genome project
Sean Eddy has proposed an experiment to establish a baseline level of spurious transcripts and to demonstrate that the null hypothesis is the best explanation for the majority of transcripts. He suggests that scientists construct a synthetic chromosome of random DNA sequences and insert it into a human cell line. The next step is to perform an ENCODE project on this DNA. He predicts that the methods will detect hundreds of transcription factor binding sites and transcripts.
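Here's a minimal sketch of what the starting material might look like (my own illustration; Eddy's proposal doesn't specify any code, and the 41% GC figure is just the approximate human genome average): generate a megabase of random DNA with no selected function, ready to be assayed exactly like real chromosomal DNA.

# Sketch (illustrative assumptions only): build a random "chromosome" of a given
# length and approximate GC content as the null-hypothesis substrate.
import random

def random_chromosome(length, gc_content=0.41, seed=0):
    """Return a random DNA string with roughly the requested GC content."""
    rng = random.Random(seed)
    return "".join(
        rng.choice("GC") if rng.random() < gc_content else rng.choice("AT")
        for _ in range(length)
    )

synthetic = random_chromosome(1_000_000)  # a 1-Mb random sequence
print(synthetic[:60], synthetic.count("G") + synthetic.count("C"))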
Change your worldview
There are two ways of looking at biochemical processes within cells. The first imagines that everything has a function and that cells are as fine-tuned and functional as a Swiss watch. The second imagines that biochemical processes are just good enough to do the job and that there are lots of mistakes and sloppiness. The first worldview is inconsistent with the evidence; the second is consistent with it. If you are one of those people who think that cells and genomes are the products of adaptive excellence, then it's time to change your worldview.


Cold Spring Harbor tells us about the “dark matter” of the genome (Part I)


This is a podcast from Cold Spring Harbor [Dark Matter of the Genome, Pt. 1 (Base Pairs Episode 8)]. The hosts try to convince us that most of the genome is mysterious "dark matter," not junk. The main theme is that the genome contains transposons that could play an important role in evolution and disease.

Here are a few facts.
  • A gene is a DNA sequence that's transcribed. There are about 20,000 protein-coding genes and they cover about 25% of the genome (including introns). It's false to say that genes only occupy 2% of the genome. In addition to protein-coding genes, there are about 5,000 noncoding genes that take up about 5% of the genome. Most of them have been known for decades.
  • It has been known for many decades that the human genome has no more than 30,000 genes. This fact was known by knowledgeable scientists long before the human genome sequence was published.
  • It has been known for decades that about 50% of our genome is composed of defective bits and pieces of once-active transposons. Thus, most of our genome looks like junk and behaves like junk. It is not some mysterious "dark matter." (The podcast actually says that 50% of our genome is defective transposons, but it claims this is a recent discovery and that it's not junk.)
  • The evidence for junk DNA comes from many different sources. It's not a mystery. It's really junk DNA. The term "junk DNA" was not created to disguise our ignorance of what's in your genome.
  • In addition to genes, there are lots of other functional regions of the genome. No knowledgeable scientists ever thought that the only functional parts of the genome were the exons of protein-coding genes.
There's much value in research on ALS but does it have to be coupled with an incorrect view of our genome? How many errors can you recognize in this podcast? Keep in mind that this is sponsored by one of the leading labs in the world.
Most of the genome is not genes, but another form of genetic information that has come to be known as the genome’s “dark matter.” In this episode, we explore how studying this unfamiliar territory could help scientists understand diseases such as ALS.


Experts meet to discuss non-coding RNAs – fail to answer the important question

The human genome is pervasively transcribed. More than 80% of the genome is complementary to transcripts that have been detected in some tissue or cell type. The important question is whether most of these transcripts have a biological function. How many genes are there that produce functional non-coding RNA?

There's a reason why this question is important. It's because we have every reason to believe that spurious transcription is common in large genomes like ours. Spurious, or accidental, transcription occurs when the transcription initiation complex binds nonspecifically to sites in the genome that are not real promoters. Spurious transcription also occurs when the initiation complex (RNA polymerase plus factors) fires in the wrong direction from real promoters. Binding and inappropriate transcription are aided by the binding of transcription factors to nonpromoter regions of the genome—a well-known feature of all DNA-binding proteins [see Are most transcription factor binding sites functional?].

The controversy over the role of these transcripts has been around for many decades but it has become more important in recent years as many labs have focused on identifying transcripts. After devoting much time and effort to the task, these groups are not inclined to admit they have been looking at junk RNA. Instead, they tend to focus on trying to prove that most of the transcripts are functional.

Keep in mind that the correct default explanation is that a transcript is just spurious junk unless someone has demonstrated that it has a function. This is especially true of transcripts that are present at less than one copy per cell, are not conserved in other species, and have been detected in only a few cell types. That describes the majority of transcripts.
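If you wanted to apply that default systematically, the triage might look something like the sketch below (my own invented record fields and thresholds, purely to illustrate the logic): anything that fails the abundance, breadth, and conservation filters stays in the junk (null hypothesis) bin until someone demonstrates a function.

# Sketch with hypothetical field names and thresholds: keep only transcripts that
# clear minimal abundance, breadth-of-expression, and conservation criteria.

def candidate_transcripts(transcripts,
                          min_copies_per_cell=1.0,
                          min_cell_types=3,
                          require_conservation=True):
    """Filter transcript records (dicts) down to plausible functional candidates."""
    kept = []
    for t in transcripts:
        if t["copies_per_cell"] < min_copies_per_cell:
            continue      # too rare to do a concentration-dependent job
        if t["n_cell_types"] < min_cell_types:
            continue      # detected in only one or two cell types
        if require_conservation and not t["conserved"]:
            continue      # no sign of purifying selection
        kept.append(t)
    return kept

transcripts = [
    {"id": "TX1", "copies_per_cell": 0.1, "n_cell_types": 1, "conserved": False},
    {"id": "TX2", "copies_per_cell": 12.0, "n_cell_types": 8, "conserved": True},
]
print([t["id"] for t in candidate_transcripts(transcripts)])  # ['TX2']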

Nobody knows how many different transcripts have been detected since there's no comprehensive database that combines all of the data. I suspect there are several hundred thousand different transcripts. Human genome annotators have struggled to represent this data accurately. They have rejected or ignored most of the transcripts and focused on those that are most likely to have a biological function. Unfortunately, their criteria for functionality are weak and this leads them to include a great many putative genes in their annotated genome. For example, the latest annotation by Ensembl lists 22,521 genes for noncoding RNAs. This is slightly more than the total number of protein-coding genes (20,338) [Human assembly and gene annotation].

It's important to note two things about the work of these annotators. First, they have correctly rejected most of the transcripts. Second, they cannot provide solid evidence that most of those 22,521 transcripts are actually functional. What they really should be saying is that these are the best candidates for real genes.

The experts held a meeting recently in Heraklion, Greece (June 9-14, 2017). You would think that a major emphasis in that meeting would have been on identifying how many of these transcripts are biologically functional but that doesn't seem to have been a major theme according to the brief report published in Genome Biology [Canonical mRNA is the exception, rather than the rule].

Let's look at what the authors have to say about the important question.
Investigations into gene regulation and disease pathogenesis have been protein-centric for decades. However, in recent years there has been a profound expansion in our knowledge of the variety and complexity of eukaryotic RNA species, particularly the non-coding RNA families. Vast amounts of RNA sequencing data generated from various library preparation methods have revealed these non-coding RNA species to be unequivocally more abundant than canonical mRNA species.
This is very misleading. It's certainly true that there are far more than 20,000 transcripts but that's not controversial. What's controversial is how many of those transcripts are functional and how many genes are devoted to producing those functional transcripts.

The report on the meeting doesn't offer an opinion on that matter unless the authors are referring only to functional RNA species. I get the impression that most of the people who attend these meetings are reluctant to state unequivocally whether there's convincing evidence of function for more than 5,000 RNAs. I don't think that evidence exists. Until it does, the default scientific position is that there are far fewer genes for functional noncoding RNAs than for proteins.


The Extended Evolutionary Synthesis – papers from the Royal Society meeting

I went to London last November to attend the Royal Society meeting on New trends in evolutionary biology: biological, philosophical and social science perspectives [New Trends in Evolutionary Biology: The Program].

The meeting was a huge disappointment [Kevin Laland's new view of evolution]. It was dominated by talks that were so abstract and abstruse that it was difficult to mount any serious discussion. The one thing that was crystal clear is that almost all of the speakers had an old-fashioned view of the current status of evolutionary theory. Thus, they were for the most part arguing against a strawman version of evolutionary theory.

The Royal Society has now published the papers that were presented at the meeting [Theme issue ‘New trends in evolutionary biology: biological, philosophical and social science perspectives’ organized by Denis Noble, Nancy Cartwright, Patrick Bateson, John Dupré and Kevin Laland]. I'll list the Table of Contents below.

Most of these papers are locked behind a paywall and that's a good thing because you won't be tempted to read them. The overall quality is atrocious—the Royal Society should be embarrassed to publish them.1 The only good thing about the meeting was that I got to meet a few friends and acquaintances who were supporters of evolution. There was also a sizable contingent of Intelligent Design Creationists at the meeting and I enjoyed talking to them as well2 [see Intelligent Design Creationists reveal their top story of 2016].

  • Introduction: New trends in evolutionary biology: biological, philosophical and social science perspectives
    Patrick Bateson, Nancy Cartwright, John Dupré, Kevin Laland, Denis Noble
  • Review article: Why an extended evolutionary synthesis is necessary
    Gerd B. Müller
  • Research article: Evolutionary biology today and the call for an extended synthesis
    Douglas J. Futuyma
  • Review article: Developmental plasticity: re-conceiving the genotype
    Sonia E. Sultan
  • Discussion: Niche construction, sources of selection and trait coevolution
    Kevin Laland, John Odling-Smee, John Endler
  • Research article: Why developmental niche construction is not selective niche construction: and why it matters
    Karola Stotz
  • Review article: Biological action in Read–Write genome evolution
    James A. Shapiro
  • Review article: The evolutionary implications of epigenetic inheritance
    Eva Jablonka
  • Research article: Genetic, epigenetic and exogenetic information in development and evolution
    Paul E. Griffiths
  • Review article: Extended genomes: symbiosis and evolution
    Gregory D. D. Hurst
  • Review article: Domestication as a model system for the extended evolutionary synthesis
    Melinda A. Zeder
  • Review article: Evolution viewed from physics, physiology and medicine
    Denis Noble
  • Research article: The metaphysics of evolution
    John Dupré
  • Review article: The subject as cause and effect of evolution
    Peter Godfrey-Smith
  • Review article: Adaptability and evolution
    Patrick Bateson
  • Research article: The purpose of adaptation
    Andy Gardner
  • Research article: Human nature, human culture: the case of cultural evolution
    Tim Lewens
  • Review article: Human niche, human behaviour, human nature
    Agustin Fuentes
  • Review article: A second inheritance system: the extension of biology through culture
    Andrew Whiten
  • Review article: Early Homo, plasticity and the extended evolutionary synthesis
    Susan C. Antón, Christopher W. Kuzawa


1. Futuyma's paper is a notable exception.

2. That's me with Jonathan McLatchie in the photo.

Niles Eldredge explains punctuated equilibria

Lots of people misunderstand punctuated equilibria. It's a theory about small changes leading to speciation. In many cases the changes are so slight that you and I might not notice the difference. These are not leaps or saltations and there are no intermediates or missing links. The changes may be due to changes in the frequency of one or two alleles.

Punctuated equilibria refers to the pattern in which these speciation events take place relatively quickly and are followed by much longer periods of stasis (no change). Niles Eldredge explains how the theory is derived from his studies of thousands of trilobite fossils.



Niles Eldredge explains hierarchy theory

You may not agree but you should at least know what some evolutionary biologists are thinking.



How much of the human genome is devoted to regulation?

All available evidence suggests that about 90% of our genome is junk DNA. Many scientists are reluctant to accept this evidence—some of them are even unaware of the evidence [Five Things You Should Know if You Want to Participate in the Junk DNA Debate]. Many opponents of junk DNA suffer from what I call The Deflated Ego Problem. They are reluctant to concede that humans have about the same number of genes as all other mammals and only a few more than insects.

One of the common rationalizations is to speculate that while humans may have "only" 25,000 genes they are regulated and controlled in a much more sophisticated manner than the genes in other species. It's this extra level of control that makes humans special. Such speculations have been around for almost fifty years but they have gained in popularity since publication of the human genome sequence.

In some cases, the extra level of regulation is thought to be due to abundant regulatory RNAs. This means there must be tens of thousands of extra genes expressing these regulatory RNAs. John Mattick is the most vocal proponent of this idea and he won an award from the Human Genome Organization for "proving" that his speculation is correct! [John Mattick Wins Chen Award for Distinguished Academic Achievement in Human Genetic and Genomic Research]. Knowledgeable scientists know that Mattick is probably wrong. They believe that most of those transcripts are junk RNAs produced by accidental transcription at very low levels from non-conserved sequences.

I agree with those scientists but for the sake of completeness here's what John Mattick believes about regulation.
Discoveries over the past decade portend a paradigm shift in molecular biology. Evidence suggests that RNA is not only functional as a messenger between DNA and protein but also involved in the regulation of genome organization and gene expression, which is increasingly elaborate in complex organisms. Regulatory RNA seems to operate at many levels; in particular, it plays an important part in the epigenetic processes that control differentiation and development. These discoveries suggest a central role for RNA in human evolution and ontogeny. Here, we review the emergence of the previously unsuspected world of regulatory RNA from a historical perspective.

... The emerging evidence suggests that there are more genes encoding regulatory RNAs than those encoding proteins in the human genome, and that the amount and type of gene regulation in complex organisms have been substantially misunderstood for most of the past 50 years. (Morris and Mattick, 2014)
The evidence does not support the claim that there are more than 20,000 genes for regulatory RNAs. It's more consistent with the idea that most transcripts are non-functional.

There's another speculation related to regulation. This one was promoted by ENCODE in their original 2007 preliminary study and later on in the now-famous 2012 papers. The ENCODE researchers identified thousands of putative regulatory sites in the genome and concluded ...
... even using the most conservative estimates, the fraction of bases likely to be involved in direct gene regulation, even though incomplete, is significantly higher than that ascribed to protein-coding exons (1.2%), raising the possibility that more information in the human genome may be important for gene regulation than for biochemical function.
They go on to speculate that 8.5% of the genome may be involved in regulation. Think about that for a minute. If we assume that each site covers 100 bp, then the ENCODE researchers are speculating that there might be more than 2 million regulatory sites in the human genome! That's about 100 regulatory sites for every gene!
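Here's the back-of-envelope arithmetic behind those numbers (my own, using a rounded genome size of about 3.2 billion bp, the assumed 100 bp per site, and roughly 25,000 genes):

# Back-of-envelope check of the figures quoted above (approximate inputs).
genome_size = 3_200_000_000      # haploid human genome, bp (rounded)
regulatory_fraction = 0.085      # ENCODE's speculated regulatory fraction
site_size = 100                  # assumed size of one regulatory site, bp
genes = 25_000                   # approximate number of genes

regulatory_bp = genome_size * regulatory_fraction   # 272,000,000 bp
sites = regulatory_bp / site_size                    # 2,720,000 sites
print(int(sites), round(sites / genes))              # ~2.7 million sites, ~109 per gene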

This is absurd. There must be something wrong with the data.

It's not difficult to see the problem. The assays used by ENCODE are designed to detect transcription factor binding sites, places where histones have been modified, and sites that are sensitive to DNase I. These are all indicators of functional regulatory sites but they are also likely to be associated with non-functional sites. For example, transcription factors will bind to thousands of sites in the genome that have nothing to do with regulation [Are most transcription factor binding sites functional?].

It's very likely that spurious transcription factor binding will lead to histone modification and DNase I sensitivity due to the loosening of chromatin. What this means is that these assays don't actually detect regulatory sites or enhancers as ENCODE claims. Instead, they detect putative regulatory sites that have to be confirmed by additional experiments.

The scientific community is gradually becoming more and more skeptical of these over-interpreted genomic experiments.

The latest genomics paper on regulatory sites has just been posted on bioRxiv (Benton et al., 2017). This is a pre-publication archive site. The paper has not been peer-reviewed and accepted by a scientific journal, but it's still making a splash on Twitter and the rest of the internet.

Here's the abstract ...
Non-coding gene regulatory loci are essential to transcription in mammalian cells. As a result, a large variety of experimental and computational strategies have been developed to identify cis-regulatory enhancer sequences. However, in practice, most studies consider enhancer candidates identified by a single method alone. Here we assess the robustness of conclusions based on such a paradigm by comparing enhancer sets identified by different strategies. Because the field currently lacks a comprehensive gold standard, our goal was not to identify the best identification strategy, but rather to quantify the consistency of enhancer sets identified by ten representative identification strategies and to assess the robustness of conclusions based on one approach alone. We found significant dissimilarity between enhancer sets in terms of genomic characteristics, evolutionary conservation, and association with functional loci. This substantial disagreement between enhancer sets within the same biological context is sufficient to influence downstream biological interpretations, and to lead to disparate scientific conclusions about enhancer biology and disease mechanisms. Specifically, we find that different enhancer sets in the same context vary significantly in their overlap with GWAS SNPs and eQTL, and that the majority of GWAS SNPs and eQTL overlap enhancers identified by only a single identification strategy. Furthermore, we find limited evidence that enhancer candidates identified by multiple strategies are more likely to have regulatory function than enhancer candidates identified by a single method. The difficulty of consistently identifying and categorizing enhancers presents a major challenge to mapping the genetic architecture of complex disease, and to interpreting variants found in patient genomes. To facilitate evaluation of the effects of different annotation approaches on studies' conclusions, we developed a database of enhancer annotations in common biological contexts, creDB, which is designed to integrate into bioinformatics workflows. Our results highlight the inherent complexity of enhancer biology and argue that current approaches have yet to adequately account for enhancer diversity.
The authors looked at several ENCODE databases identifying sites of histone modification and DNase I sensitivity, as well as sites that are transcribed. They specifically looked at databases predicting functional enhancers based on these data. What they found was very little agreement among the various databases in their predictions of functionality. When they looked at independent assays using the same cell lines, they found considerable variation and a surprising lack of correlation.
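The kind of comparison being described boils down to a very simple question: how many of the same base pairs do two "enhancer" annotations actually cover? Here's a toy sketch (my own Python, not the authors' pipeline) using the Jaccard index; a low value means the two strategies are largely calling different regions.

# Toy sketch: agreement between two enhancer annotations measured as the Jaccard
# index of the base pairs they cover. Intervals are invented examples.

def covered_bases(intervals):
    """Set of positions covered by a list of (start, end) half-open intervals."""
    positions = set()
    for start, end in intervals:
        positions.update(range(start, end))
    return positions

def jaccard(set_a, set_b):
    a, b = covered_bases(set_a), covered_bases(set_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Enhancer calls from two different identification strategies (hypothetical):
histone_based = [(100, 400), (900, 1100)]
dnase_based   = [(350, 500), (2000, 2200)]
print(jaccard(histone_based, dnase_based))  # ~0.06: the two sets barely agree

Building explicit position sets is fine for a toy example; real pipelines use interval arithmetic over whole genomes, but the conclusion is the same when the reported overlaps are this low.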

While this lack of correlation does not prove that the sites are non-functional, it does indicate that you shouldn't just assume that these sites identify real functional enhancers (regulatory sites). In other words, skepticism should be the appropriate stance.

But that's NOT what the authors conclude. Instead, they assume, without evidence, that every assay identifies real enhancers and what the data shows is that there's an incredible diversity of functional enhancers.
... we believe that ignoring enhancer diversity impedes research progress and replication, since, "what we talk about when we talk about enhancers" include diverse sequence elements across an incompletely understood spectrum, all of which are important for proper gene expression. [my emphasis - LAM]
I find it astonishing that the authors don't even discuss the possibility that they may be looking at spurious sites that have nothing to do with biologically functional regulation. Scientists can find all kinds of ways of rationalizing the data when they are convinced they are observing function (confirmation bias). In this case, the data tells them that many of the sites do not have all of the characteristics of actual regulatory sites. The obvious conclusion, in my opinion, is that the sites are non-functional, just as we suspect from our knowledge of basic biochemistry.

True believers, on the other hand, arrive at a different conclusion. They think this data shows increased complexity and mysterious functional roles that are "incompletely understood."

I hope reviewers of this paper will force the authors to consider spurious binding and non-functional sites. I hope they will force the authors to use "putative enhancers" throughout their paper instead of just "enhancers."


Benton, M.L., Talipineni, S.C., Kostka, D., and Capra, J.A. (2017) Genome-wide Enhancer Maps Differ Significantly in Genomic Distribution, Evolution, and Function. bioRxiv. [doi: 10.1101/176610]

Morris, K.V., and Mattick, J.S. (2014) The rise of regulatory RNA. Nature Reviews Genetics, 15:423-437. [doi: 10.1038/nrg3722]

A philosopher defends agnosticism

Paul Draper is a philosopher at Purdue University (West Lafayette, Indiana, USA). He has just (Aug. 2, 2017) posted an article on Atheism and Agnosticism on the Stanford Encyclopedia of Philosophy website.

Many philosophers use a different definition of atheism than many atheists. Philosophers tend to define atheism as the proposition that god(s) do not exist. Many atheists (I am one) define atheism as the lack of belief in god(s). The distinction is important, but for now I want to discuss Draper's defense of agnosticism.

Keep in mind that Draper defines atheism as "god(s) don't exist." He argues, convincingly, that this proposition cannot be proven. He also argues that theism—the proposition that god(s) exist—cannot be proven either. Therefore, the only defensible position for a philosopher like him is agnosticism.

But there's a problem ... and it's similar to the one concerning the definition of atheism. Here's one way to describe an agnostic according to Draper.
... an agnostic is a person who has entertained the proposition that there is a God but believes neither that it is true nor that it is false. Not surprisingly, then, the term “agnosticism” is often defined, both in and outside of philosophy, not as a principle or any other sort of proposition but instead as the psychological state of being an agnostic. Call this the “psychological” sense of the term. It is certainly useful to have a term to refer to people who are neither theists nor atheists, but philosophers might wish that some other term besides “agnostic” (“theological skeptic”, perhaps?) were used.
I wonder if there are any agnostics who adhere to this definition? Most people will, after considering the question, reach a conclusion about whether god(s) exist or not regardless of whether the conclusion can be rigorously defended. Most of those who choose to call themselves agnostics will have concluded that there are no god(s) and will act out their lives accordingly. They are atheists by my definition.

But this is not the definition of agnosticism that Draper prefers.
If, however, “agnosticism” is defined as a proposition, then “agnostic” must be defined in terms of “agnosticism” instead of the other way around. Specifically, “agnostic” must be defined as a person who believes that the proposition “agnosticism” is true instead of “agnosticism” being defined as the state of being an agnostic. And if the proposition in question is that neither theism nor atheism is known to be true, then the term “agnostic” can no longer serve as a label for those who are neither theists nor atheists since one can consistently believe that atheism (or theism) is true while denying that atheism (or theism) is known to be true.
I know a theist who is content to call himself an agnostic because he cannot prove the existence of his preferred god(s), even though he believes in them and acts accordingly. Similarly, there are many nonbelievers (atheists by my definition) who will accept the proposition that neither the existence nor the nonexistence of god(s) is known to be true as an absolute fact. Thus, you can have believers in god(s) who are agnostics and nonbelievers in god(s) who are agnostics.

This is why Dawkins refers to himself as an agnostic atheist.

The simplest argument for this version of agnosticism is that you cannot prove a negative. Thus, although you can, in theory, prove that god(s) exist, you can never prove that they don't exist. If you define atheism as the belief that god(s) don't exist then that version of atheism is logically indefensible if you are in a philosophy class. In the real world, probabilities count so that if something is extremely improbable you can reasonably maintain that it doesn't exist. You can certainly act and behave as if it doesn't exist. We do that all the time. I'm not worried about being abducted by aliens in near-Earth orbit. See Russell's teapot.

I don't think philosophers like that argument so they look for better ways to defend agnosticism. Here's how Paul Draper does it ....
4. An Argument for Agnosticism
According to one relatively modest form of agnosticism, neither versatile theism nor its denial, global atheism, is known to be true. Robin Le Poidevin (2010: 76) argues for this position as follows:
  • (1) There is no firm basis upon which to judge that theism or atheism is intrinsically more probable than the other.
  • (2) There is no firm basis upon which to judge that the total evidence favors theism or atheism over the other.
It follows from (1) and (2) that
  • (3) There is no firm basis upon which to judge that theism or atheism is more probable than the other.
It follows from (3) that
  • (4) Agnosticism is true: neither theism nor atheism is known to be true.
In my experience, the vast majority of agnostics, including agnostic philosophers, have judged that there are no gods. Unless they are being totally irrational, the fact that they don't act as if god(s) existed means they have reached a conclusion concerning the existence of god(s). Presumably they must have a reason for reaching this conclusion, even if it's only a tentative one.

I assume their reasons are the same as mine—there's no believable evidence for the existence of god(s) so there's no reason to believe in them. The evidence strongly favors the proposition that god(s) don't exist.

I reject propositions #1 and #2. I think there IS firm basis for judging that god(s) don't exist. Part of that "firm basis" is because of my understanding of how the natural universe works and my understanding of the main arguments for the existence of god(s). I reject conclusion #3 because there IS firm basis for judging that god(s) don't exist.

Therefore, in my opinion, strict agnosticism of this sort is false because nonexistence of god(s) is far more probable than existence of god(s). I have to ask myself why philosophers argue this way. I think it's because they want to set up a rigorous logical proof of their propositions and conclusions. They are uncomfortable with probabilities and they aren't overly concerned about how people behave in the real world where these discussions play out.

I think my view is similar to pragmatism but, as usual, when you read what philosophers have to say about a viewpoint it becomes very confusing.