Why Philosophy of Biology?

Robert Lawrence Kuhn has published a series of videos on his "Closer to Truth" site. On March 4, 2024 he posted a teaser video introducing Season 23: "Why Philosophy of Biology." The video contains short clips of his interviews with philosophers of biology (see list below).

Here's the blurb covering the introduction to the new season.

How can philosophy advance biology? How can biology influence philosophy? In this first series on Philosophy of Biology, Closer to Truth explores the challenges and implications of evolution. We ask how life on earth came to be as it is, and how humans came to be as we are. We address biologically based issues, such as sex/gender, race, cognition, culture, morality, healthcare, religion, alien life, and more. When philosophy and biology meet, sparks fly as both are enriched.

Those are all interesting questions. Some of them can only be answered by philosophers but others require major input from scientists. One of the important issues for philosophy of science seems to be the conflict between the philosophy of the early 20th century, which was developed with physics as the model science, and the success of molecular biology in the latter half of the 20th century, which didn't play by the same rules. (See the short interview with Paul Griffiths, whom I greatly admire, for a succinct explanation of this problem.)

I'm very conflicted about the role of philosophy in understanding the science of biology and even more conflicted about whether philosophers can distinguish good science from bad science (Richard Dawkins, Denis Noble). I'm also puzzled by the apparent reluctance of philosophers to openly challenge their colleagues who get the science wrong. Watch the video to see if my scepticism is warranted.


Evelyn Fox Keller (1936 – 2023) and junk DNA

Evelyn Fox Keller died a few days ago (Sept. 22, 2023). She was a professor of History and Philosophy of Science at the Massachusetts Institute of Technology (Cambridge, MA, USA). Most of the obituaries praise her for her promotion of women scientists and her critiques of science as a male-dominated discipline. More recently, she turned her attention to molecular biology and genomics and many philosophers (and others) seem to think that she made notable contributions in that area as well.


“Has Science Killed Philosophy?”

This is a debate sponsored by the Royal Institute of Philosophy on the question "Has science killed philosophy?" It suffers from one of the main problems of the philosophy of science and that's an unreasonable focus on theoretical physics. What's interesting is that the question even arises because that suggests to me that there's some reason to suspect that philosophy might not be as important as most philosophers think.

Watch the debate and decide for yourselves whether philosophy is still useful. Frankly, I found it very boring. I didn't learn anything that I didn't know before and I didn't find the defense of philosophy compelling. The philosophers' best answer to the challenge is that their discipline has complete control over rational thinking so every time you are thinking seriously about something you are doing philosophy. Ergo, philosophy will never be killed by science.

What do you think of Eleanor Knox's description of the differences between your right hand and your left hand? Is she on to a deep metaphysical question that science can't address? Or is this an example of why scientists are skeptical of the value of philosophy?

All three panelists were asked to identify a modern philosopher who made a significant contribution to science. Alex Rosenberg immediately identified someone named Samir Okasha whom I've never heard of. Apparently, Okasha made a significant contribution to the levels of selection question in evolution. According to Alex Rosenberg, the philosophers that he listens to tell him that if anyone has settled these questions it's Okasha. Perhaps he should listen to evolutionary biologists to get their view on the subject?


Philosophers argue that scientific conclusions need not be accurate, justified, or believed by their authors

A remarkable paper has just been posted to a philosophy of science preprint website. (It will be published in Synthese.) Like many papers in this field it's difficult to read and the logic is convoluted but the bottom line is that scientists don't really need to be held to the old standards that we scientists used to think are essential.

Dang, Haixin and Bright, Liam Kofi (2021) Scientific Conclusions Need Not Be Accurate, Justified, or Believed by their Authors. PhilSci Archive [PDF]

We argue that the main results of scientific papers may appropriately be published even if they are false, unjustified, and not believed to be true or justified by their author. To defend this claim we draw upon the literature studying the norms of assertion, and consider how they would apply if one attempted to hold claims made in scientific papers to their strictures, as assertions and discovery claims in scientific papers seem naturally analogous. We first use a case study of William H. Bragg’s early 20th century work in physics to demonstrate that successful science has in fact violated these norms. We then argue that features of the social epistemic arrangement of science which are necessary for its long run success require that we do not hold claims of scientific results to their standards. We end by making a suggestion about the norms that it would be appropriate to hold scientific claims to, along with an explanation of why the social epistemology of science—considered as an instance of collective inquiry—would require such apparently lax norms for claims to be put forward.

Really? I'm not going to review all the claims made in this paper but let's just look at one of the examples they give.

We will now argue that public scientific avowals should not be held to factive norms, justification norms, and belief norms. That is to say, we will argue that public avowals which violate all such norms are entirely appropriate in scientific inquiry. Our argument depends on how scientific findings are actually communicated in the scientific community, or the role such communications play. To illustrate the role of public scientific avowals in science, consider this scenario which a scientist may often find herself in:

“Zahra is a scientist working at the cutting edge of her field. Based on her research, she comes up with a new hypothesis. She diligently pursues inquiry according to the best practices of her field for many months. Her new hypothesis would be considered an important breakthrough discovery. Zahra knows that many more studies will have to be done in the future in order to confirm her hypothesis. Further, she has read the current literature and realizes that the existing research in her field does not, on net, support her hypothesis. She does not believe that she has conclusively proven the new hypothesis. Nonetheless, Zahra sends a paper reporting her hypothesis to the leading journal in her subdiscipline. In the abstract of the paper, the conclusion, and talks she gives on her work, she advocates for her hypothesis. Peer reviewers, while also sceptical of the new hypothesis, believed that her research had been carried out according to best known practices and her paper would be a valuable contribution to the field. Her paper, which purports to have advanced a new hypothesis, is published and widely read by members of her community. In subsequent years, additional research in her field conclusively demonstrates that Zahra’s hypothesis was false.”

Public scientific avowals like Zahra’s maintaining her hypothesis do not live up to the standards set by the norms of assertion. Zahra’s avowals were false. Her avowals were not justified by the total evidence available to her, since she is acquainted with the existing research in her field which does not support her hypothesis. Furthermore, Zahra herself did not fully believe in her avowals. Nonetheless, we believe that some such avowals that fail norms we hold assertions to can still be important to the epistemic success of science. In fact, Zahra’s conduct is exactly how scientists ought to act in order to successfully communicate scientific findings. During active scientific research, public scientific avowals will often fail to meet the norms of assertion, yet scientists still need to continue to make avowals which report their findings to other members of their community.

Now, the fact that Zahra's hypothesis turned out to be false is irrelevant as long as she honestly thought that her data was accurate and her advocacy was scientifically justified. If she didn't actually believe that her new hypothesis was possibly correct then she was obliged, in my opinion, to state that she was just advocating it in order to stimulate further research. None of that violates acceptable standards of science as far as I'm concerned. (Note that I'm talking about science standards here and not epistemological standards. There may not be as much overlap as you would hope.)

It seems to me that these philosophers are nitpicking—something that philosophers do on occasion. The only interesting part of this discussion, as far as I'm concerned, is how Zahra deals with the fact that the existing evidence conflicts with her hypothesis. The proper way to deal with that is to describe these conflicts correctly and point out why they need to be reexamined in light of her new hypothesis. She may choose to ignore some evidence on the grounds that it's due to errors or bad science or she may argue that the existing data is incomplete. The key point is that she is obliged to follow Richard Feynman's rule.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can — if you know anything at all wrong, or possibly wrong — to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it.

Richard Feynman in "Surely You're Joking, Mr. Feynman!": Adventures of a Curious Character.

This brings me to some of the controversies that we currently face such as whether alternative splicing is a widespread phenomenon, whether there are tens of thousands of lncRNAs, and whether most of our genome is functional. As a general rule, with very few exceptions, everyone who is publicly avowing these claims is disobeying Feynman's rule and for that reason they are doing bad science that should be unacceptable to the scientific community. The fact that so far it is NOT unacceptable is no reason for philosophers, or anybody else, to justify it.

The two philosophers who wrote this article are Haixin Dang and Liam Kofi Bright of Leeds University and the London School of Economics and Political Science respectively. They close their argument with,

When stating their central claims scientists should not be held to the kind of norms we hold assertions to if collective inquiry is to flourish. At the least, properly put forward scientific public avowals frequently do not and need not satisfy those norms of assertion that have been discussed in the analytic epistemology literature. Public avowals in science ought to be governed by a different norm.

I don't even know what this means. If it means that the "norm" of ENCODE researchers is acceptable because it's now so common then I strongly reject that idea. I still think there are standards that we should strive to live up to and I will continue to call out those scientists who flout them.1

The final paragraph of the Dang and Bright paper is,

Underlying all our arguments is the conviction that a scientific research community must ensure its members must spread out across logical space. We must allow for the exploration of different theories, by different methods, and accept that there will be different positions adopted as time goes by and results accumulate. Perhaps inquiry shall prove to be a process of never ending adjustment, and this will be our state in perpetuity. Or perhaps we may eventually learn from science what is actual. But even if so, in order to get there, we must allow that in the midst of inquiry, scientific public avowals will frequently be defences of implausible possibilities.

That's just silly motherhood stuff. Nobody disagrees with that. Dissent and controversy are what's so exciting about science and nobody should try to suppress them. But there are definitely rules that scientists must follow if they are going to be respected and one of those rules is that you can't ignore dissent and controversy. You must respect and deal with those who disagree with you and if you try to pretend that your opponents don't exist then you are not a real scientist because you are doing the opposite of what the motherhood statement proposes; you are suppressing dissent and controversy.


I'm pretty sure that the key words in the paragraph are "analytic epistemology literature" and what the authors are really debating is some set of historic philosophical rules about how scientists are supposed to behave. That's why Dang and Bright are proposing some sort of new standard called a "contextualist justificatory norm" that epistemologists should follow. I'm not going down that rabbit hole. I'm addressing their paper from the context of how real scientists should behave, not how real epistemologists should behave. They have very little in common in today's world.

Is science the only way of knowing?

Most of us learned that science provides good answers to all sorts of questions ranging from whether a certain drug is useful in treating COVID-19 to whether humans evolved from primitive apes. A more interesting question is whether there are any limitations to science or whether there are any other effective ways of knowing. The question is related to the charge of "scientism," which is often used as a pejorative term to describe those of us who think that science is the only way of knowing.

I've discussed these issues many times on this blog so I won't rehash all the arguments. Suffice it to say that there are two definitions of science: the broad definition and the narrow one. The narrow definition says that science is merely the activity carried out by geologists, chemists, physicists, and biologists. Using this definition it would be silly to say that science is the only way of knowing. The broad definition can be roughly described as: science is a way of knowing that relies on evidence, logic (rationality), and healthy skepticism.

The broad definition is the one preferred by many philosophers and it goes something like this ...

Unfortunately neither "science" nor any other established term in the English language covers all the disciplines that are parts of this community of knowledge disciplines. For lack of a better term, I will call them "science(s) in the broad sense." (The German word "Wissenschaft," the closest translation of "science" into that language, has this wider meaning; that is, it includes all the academic specialties, including the humanities. So does the Latin "scientia.") Science in a broad sense seeks knowledge about nature (natural science), about ourselves (psychology and medicine), about our societies (social science and history), about our physical constructions (technological science), and about our thought construction (linguistics, literary studies, mathematics, and philosophy). (Philosophy, of course, is a science in this broad sense of the word.)

Sven Ove Hansson, "Defining Pseudoscience and Science" in Philosophy of Pseudoscience: Reconsidering the Demarcation Problem.

Clearly, scientific education ought to mean the implanting of a rational, sceptical, experimental habit of mind. It ought to mean acquiring a method – a method that can be used on any problem that one meets – and not simply piling up a lot of facts.

George Orwell

Using the broad definition, one can make a strong case that science is the only proven way of gaining knowledge. All other contenders are either trivial (mathematics), wrong (religion) or misguided (philosophy). So far, nobody that I know has been able to make a convincing case for any non-scientific way of knowing. Thus, I adopt as my working hypothesis the view that science is the only way of knowing.

Last year, Jerry Coyne revived the debate by posting an article about our favorite philosopher Maarten Boudry.1 Boudry also adopts the broad definition of science and agrees that there are no other ways of knowing [Scientism schmientism! Why there are no other ways of knowing apart from science (broadly construed)]. As I mentioned above, the debate is related to the charge of "scientism," which is often levelled against people like Boudry and Coyne (and me).

The debate over science as a way of knowing hasn't been settled. There are still lots of philosophers fighting a rearguard action to save philosophy and the humanities from the science invasion. Boudry and Massimo Pigliucci have put together a series of papers on the debate and it's a must-read for anyone who participates in this war. One of the defenders of philosophy in this book is Stephen Law, who is active on Facebook so you can engage in the debate there.

Stephen claims that there are two kinds of questions to which science cannot supply answers: moral questions and philosophical questions. Neither of those makes any sense to me. Moral questions are essentially questions about the best way for societies to behave and the answers to those questions clearly depend on evidence and on observations about existing societies. As for philosophical questions, Law describes them like this,

On my view, philosophical questions are, for the most part, conceptual rather than scientific or empirical, and the methods of philosophy are, broadly speaking, conceptual rather than scientific or empirical.

Stephen Law recognizes the distinction between "questions" and "knowledge" and, while he defends philosophy as a "valuable exercise," he admits that pure reason alone can't reveal reality.

So perhaps, there's at least this much right about scientism: armchair philosophical reflection alone can't reveal anything about reality outside of our own minds. However, as I say, that doesn't mean such methods are without value.

If you've read this far, then good for you! Read the ongoing debate between Jerry Coyne and Adam Gopnik [Are The Methods Used By Science The Only Ways Of Knowing?]. Now watch this lecture given by Jerry Coyne in India a few years ago to see if you can refute the idea that science is the only way of knowing.



1. That's Boudry on the right in a photo taken back in 2010 when he was just a graduate student attending a conference at the University of Toronto. He's with Stefaan Blancke. I also visited Maarten in Gent, Belgium a few years later.

Is science a social construct?

Richard Dawkins has written an essay for The Spectator in which he says,

"[Science is not] a social construct. It’s simply true. Or at least truth is real and science is the best way we have of finding it. ‘Alternative ways of knowing’ may be consoling, they may be sincere, they may be quaint, they may have a poetic or mythic beauty, but the one thing they are not is true. As well as being real, moreover, science has a crystalline, poetic beauty of its own.

The essay is not particularly provocative but it did provoke Jerry Coyne who pointed out that "The profession of science" can be construed as a social construct. In this sense Jerry is agreeing with his former supervisor, Richard Lewontin,1 who wrote,

"Science is a social institution about which there is a great deal of misunderstanding, even among those who are part of it. We think that science is an institution, a set of methods, a set of people, a great body of knowledge that we call scientific, is somehow apart from the forces that rule our everyday lives and tha goven the structure of our society... The problems that science deals with, the ideas that it uses in investigating those problems, even the so-called scientific results that come out of scientific investigation, are all deeply influenced by predispositions that derive from the society in which we live. Scientists do not begin life as scientists after all, but as social beings immersed in a family, a state, a productive structure, and they view nature through a lens that has been molded by their social structure."

Coincidentally, I just happened to be reading Science Fictions, an excellent book by Stuart Ritchie who also believes that science is a social construct but he has a slightly different take on the matter.

"Science has cured diseases, mapped the brain, forcasted the climate, and split the atom; it's the best method we have of figuring out how the universe works and of bending it to our will. It is, in other words, our best way of moving towards the truth. Of course, we might never get there—a glance at history shows us hubristic it is to claim any facts as absolute or unchanging. For ratcheting our way towards better knowledge about the world, though, the methods of science is as good as it gets.

But we can't make progress withthose methods alone. It's not enough to make a solitary observation in your lab; you must also convince other scientists that you've discovered something real. This is where the social part comes. Philosophers have long discussed how important it is for scientists to show their fellow researchers how they came to their conclusions.

Dawkins, Coyne, Lewontin, and Ritchie are all right in different ways. Dawkins is talking about science as a way of knowing, although he restricts his definition of science to the natural sciences. The others are referring to the practice of science, or as Jerry Coyne puts it, the profession. It's true that the methods of science are the best way we have to get at the truth and it's true that the way of knowing is not a social construct in any meaningful sense.

Jerry Coyne is right to point out that the methods are employed by human scientists (he's also restricting the practice of science to scientists) and humans are fallible. In that sense, the enterprise of (natural) science is a social construct. Lewontin warns us that scientists have biases and prejudices and that may affect how they do science.

Ritchie makes a different point by emphasizing that (natural) science is a collective endeavor and that "truth" often requires a consensus. That's the sense in which science is social. This is supposed to make science more robust, according to Ritchie, because real knowledge only emerges after careful and skeptical scrutiny by other scientists. His book is mostly about how that process isn't working and why science is in big trouble. He's right about that.

I think it's important to distinguish between science as a way of knowing and the behavior and practice of scientists. The second one is affected by society and its flaws are well-known but the value of science as a way of knowing can't be so easily dismissed.


1. The book is actually a series of lectures (The Massey Lectures) that Lewontin gave in Toronto (Ontario, Canada) in 1990. I attended those lectures.

The Function Wars Part VII: Function monism vs function pluralism

This post is mostly about a recent paper published in Studies in History and Philosophy of Biol & Biomed Sci where two philosophers present their view of the function wars. They argue that the best definition of function is a weak etiological account (monism) and pluralistic accounts that include causal role (CR) definitions are mostly invalid. Weak etiological monism is the idea that sequence conservation is the best indication of function but that doesn't necessarily imply that the trait arose by natural selection (adaptation); it could have arisen by neutral processes such as constructive neutral evolution.

The paper makes several dubious claims about ENCODE that I want to discuss but first we need a little background.

Background

The ENCODE publicity campaign created a lot of controversy in 2012 because ENCODE researchers claimed that 80% of the human genome is functional. That claim conflicted with all the evidence that had accumulated up to that point in time. Based on their definition of function, the leading ENCODE researchers announced the death of junk DNA and this position was adopted by leading science writers and leading journals such as Nature and Science.

Let's be very clear about one thing. This was a SCIENTIFIC conflict over how to interpret data and evidence. The ENCODE researchers simply ignored a ton of evidence demonstrating that most of our genome is junk. Instead, they focused on the well-known facts that much of the genome is transcribed and that the genome is full of transcription factor binding sites. Neither of these facts were new and both of them had simple explanations: (1) most of the transcripts are spurious transcripts that have nothing to do with function, and (2) random non-functional transcription factor binding sites are expected from our knowledge of DNA binding proteins. The ENCODE researchers ignored these explanations and attributed function to all transcripts and all transcription factor binding sites. That's why they announced that 80% of the genome is functional.
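To get a feel for why spurious binding sites are expected, here's a rough back-of-the-envelope sketch (my own illustration, not a calculation from the ENCODE papers); the motif length and the assumption of equal base frequencies are simplifications.

```python
# How often would a short transcription-factor-like motif be expected to occur
# purely by chance in a genome of ~3.2 billion base pairs?
# (Illustrative numbers only; real motifs tolerate mismatches, which makes the
# number of chance matches even larger.)

genome_size = 3.2e9                    # approximate haploid human genome size (bp)
motif_length = 8                       # a typical short binding motif (assumed)
p_exact_match = 0.25 ** motif_length   # chance of an exact match at one position,
                                       # assuming equal base frequencies

# Count both strands; this is an expectation, not an exact count.
expected_hits = 2 * genome_size * p_exact_match
print(f"Expected chance matches for one exact 8-bp motif: {expected_hits:,.0f}")
# ~98,000 exact matches by chance alone, before allowing any mismatches.
```

Since real transcription factors bind degenerate versions of their motifs, the number of potential chance sites in a genome this size runs into the millions, which is why merely detecting a bound site says nothing about function.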

Here's a reminder of what ENCODE actually said in 2012 (The ENCODE Project Consortium, 2012). The lead author is Ewan Birney, the ENCODE Consortium leader/spokesperson.
The human genome encodes the blueprint of life, but the function of the vast majority of its nearly three billion bases is unknown. The Encyclopedia of DNA Elements (ENCODE) project has systematically mapped regions of transcription, transcription factor association, chromatin structure and histone modification. These data enabled us to assign biochemical functions for 80% of the genome, in particular outside of the well-studied protein-coding regions.
As I said above, the real controversy is about the science and not about philosophical debates over the meaning of function. In 2012 it was ridiculous to dismiss all of the evidence for junk DNA and focus on transcripts and binding sites that knowledgeable scientists knew were spurious. It was ridiculous to claim that 80% of the human genome was functional. [see The truth about ENCODE and What did the ENCODE Consortium say in 2012?]

Evolution is at the heart of the controversy and that's why the ENCODE researchers were vehemently opposed by experts in molecular evolution. This strong opposition from knowledgeable experts is why the ENCODE leaders partially retracted their claim in 2014 (Kellis et al., 2014). As some scientists have pointed out, the ENCODE supporters have an adaptationist (Panglossian) view of evolution that's out of touch with modern views of evolution at the molecular level. Here's a nice summary in a paper by Casane et al. (2015).
In September 2012, a batch of more than 30 articles presenting the results of the ENCODE (Encyclopaedia of DNA Elements) project was released. Many of these articles appeared in Nature and Science, the two most prestigious interdisciplinary scientific journals. Since that time, hundreds of other articles dedicated to the further analyses of the Encode data have been published. The time of hundreds of scientists and hundreds of millions of dollars were not invested in vain since this project had led to an apparent paradigm shift: contrary to the classical view, 80% of the human genome is not junk DNA, but is functional. This hypothesis has been criticized by evolutionary biologists, sometimes eagerly, and detailed refutations have been published in specialized journals with impact factors far below those that published the main contribution of the Encode project to our understanding of genome architecture. In 2014, the Encode consortium released a new batch of articles that neither suggested that 80% of the genome is functional nor commented on the disappearance of their 2012 scientific breakthrough. Unfortunately, by that time many biologists had accepted the idea that 80% of the genome is functional, or at least, that this idea is a valid alternative to the long held evolutionary genetic view that it is not. In order to understand the dynamics of the genome, it is necessary to re-examine the basics of evolutionary genetics because, not only are they well established, they also will allow us to avoid the pitfall of a panglossian interpretation of Encode. Actually, the architecture of the genome and its dynamics are the product of trade-offs between various evolutionary forces, and many structural features are not related to functional properties. In other words, evolution does not produce the best of all worlds, not even the best of all possible worlds, but only one possible world.
There are many other scientists who have made the same points about ENCODE. I especially want to recommend Ford Doolittle's critique [Ford Doolittle's Critique of ENCODE].

On the meaning of the word "function"

The Sanger Institute (Cambridge, UK) was an important player in the ENCODE Consortium. It put out a press release on the day the papers were published [Google Earth of Biomedical Research]. The opening paragraph is ...

The ENCODE Project, today, announces that most of what was previously considered as 'junk DNA' in the human genome is actually functional. The ENCODE Project has found that 80 per cent of the human genome sequence is linked to biological function.

Many of us believe that 90% of our genome is junk and only 10% is functional so clearly there's a disagreement over the significance of the word "function." This debate has spawned the Function Wars.

Function warriors focus their attention on two definitions of function that have long been discussed by philosophers. The causal-role (CR) definition depends on identifying a role for a particular sequence; for example, a sequence that binds a transcription factor or a DNA sequence that's transcribed. The mere existence of an identifiable role for such a sequence is evidence of CR function. This is clearly nonsense because it ignores the fact that transcription factors can bind randomly to any DNA sequence that resembles a functional binding site and many DNA sequences are transcribed fortuitously by RNA polymerase from time to time creating a background noise of junk RNA transcripts.

The CR definition of function is pretty much useless as the only meaningful definition of function in biology, although identifying a causal role can be the first step in determining whether a sequence is actually functional. This is why Doolittle et al. (2014) recommend that scientists and philosophers stop talking about CR "function" and, instead, refer to CR "effects."

The selected effect (SE) function is the other definition. To understand it, let's assume that all you have is sequence information and you are interested in determining how much of the genome is functional. One of the best ways of doing this is to look at which sequences are "conserved." By this I mean sequences that change more slowly than expected given the known rate of mutation.

The sequence must be conserved in closely related species and also within the population. If it's conserved then this is powerful evidence that it's under negative selection and that is the best evidence we have that a sequence currently carries out a biological function in the species.

The latest evidence on conservation in the human genome indicates that about 8% is under negative selection. This is consistent with decades of work on genetic load showing that species could not survive if more than (roughly) 10% of the genome had to be conserved.
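Here's a minimal sketch of the genetic load reasoning, using round numbers that are my own assumptions rather than figures from any particular paper.

```python
# Genetic load, back of the envelope: each human zygote carries roughly 100 new
# mutations (an approximate, widely cited figure). The larger the functional
# fraction of the genome, the more of those mutations land where they can do damage.

new_mutations_per_birth = 100   # approximate de novo mutations per generation
deleterious_fraction = 0.1      # assumed share of mutations in functional DNA
                                # that are significantly deleterious

for functional_fraction in (0.08, 0.10, 0.50, 0.80):
    in_functional_dna = new_mutations_per_birth * functional_fraction
    deleterious = in_functional_dna * deleterious_fraction
    print(f"functional fraction {functional_fraction:.0%}: "
          f"{in_functional_dna:.0f} mutations hit functional DNA, "
          f"~{deleterious:.1f} deleterious per birth")
```

The exact numbers are debatable; the point is only that the deleterious load scales directly with the functional fraction, which is why load arguments cap the functional fraction at roughly 10% and make the 80% figure untenable.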

But sequence information alone will not tell you the actual function of a particular sequence. For that you need biochemists and molecular biologists who look at the roles that sequences play in the life of the cell/organism. Decades of work have demonstrated a strong correlation between biological function and conservation so that we can be extremely confident that conservation really does indicate function. This is how we know that genes, regulatory sequences, origins of replication, centromeres, and telomeres are functional parts of the genome.

The ENCODE Consortium relied exclusively on CR function to conclude that 80% of the genome is functional. They rejected the SE definition (see Stamatoyannopoulos, 2012). All knowledgeable scientists now agree that ENCODE was wrong and that sequence conservation is the best evidence of function.

What did ENCODE researchers really think?

In a paper published last December (2019), Brzović and Šustar claim that ENCODE used a very broad definition of function that confused actual function with evidence that a particular sequence is likely to have function. They reference Germain et al. (2014)1 as another example of philosophers who support this interpretation of the ENCODE claims.
What the proponents of ENCODE call function is, according to Germain et al. (2014), merely something that is likely to have a function. In this reading, ENCODE's biochemical function refers to activities that can be taken as evidence of potential biological functions.
This is function pluralism because it invokes both SE and CR definitions. Brzović and Šustar say that there are two versions of function pluralism: methodological pluralism and theoretical pluralism. Methodological pluralism invokes both causal role data and conservation data in an effort to demonstrate true function without making a commitment as to whether CR and/or SE definitions of function are accurate. Theoretical pluralism, on the other hand, is the view that both causal roles (CR) and conservation (SE) can prove real biological function.

The ENCODE leaders were justifiably invoking methodological pluralism in an attempt to discover function, according to Brzović and Šustar. They weren't arguing that causal roles on their own could demonstrate function.

I suppose one could look at the ENCODE leaders' "retraction" paper as support for this view (see Kellis et al. 2014) but in my opinion it is misguided—it is revisionist history in the worst sense of the word. I think the publicity campaign in 2012 showed unequivocally that ENCODE leaders really thought they had discovered true function putting to rest the idea that most of our genome is junk. They were not just advancing a hypothesis about whether 80% of the genome might possibly have a function based on their data—they actually claimed that it did have a function. They made no effort to question or correct any of the press reports saying that 80% of the human genome is functional, not junk. The fact that they backed off this claim under pressure from knowledgeable scientists doesn't mean they were misinterpreted in 2012.

"Weak etiological monism as a way out of the controversy"

The best way to resolve the controversy over how much of our genome is functional is to use scientific evidence to resolve ambiguous claims of function. Are all conserved sequences currently functional? The answer is "no" because we have examples of conserved sequences with no known function. Are there non-conserved regions of the genome that are functional? The answer is "yes" because we have examples of functional sequences that are not conserved. The SE definition is not sufficient to resolve the controversy [see The Function Wars Part VI: The problem with selected effect function].

Are there lots of causal-role sequences that aren't functional? The answer is "yes" because we have examples. Is 90% of our genome junk? The answer is probably "yes" because that's what the cumulative data shows. Biology is messy and no strict definition of the word "function" is going to cover every possibility.

Brzović and Šustar think they have come up with a philosophical way of resolving the controversy and they describe it in the last section of their paper. It covers four pages under the heading, "Weak etiological monism as a way out of the controversy." They say that etiological functions are those that define function in terms of their evolutionary history. The strong version, which they say is the standard SE version, defines function solely in terms of whether a particular trait has arisen by adaptation (whether it was selected for in the past). They quote Ford Doolittle as a proponent of strong SE monism because he said,
... the functions of a trait or feature are all and only those effects of its presence for which it was under positive natural selection in the (recent) past and for which it is under (at least) purifying selection now. (Doolittle, 2013)
The world is not inhabited exclusively by fools and when a subject arouses intense interest and debate, as this one has, something other than semantics is usually at stake.

Stephen Jay Gould (1982)
Weak etiological monism recognizes that a trait may have arisen by constructive neutral evolution but it is now maintained by natural selection (purifying selection).2 Thus, in the weak version, it is not necessary that a functional trait have arisen by positive selection (adaptation), only that it is currently conserved by purifying selection. Brzović and Šustar recognize that Doolittle is probably a proponent of weak monism in spite of the definition he gave in his 2013 paper. They are correct; Doolittle has long been a proponent of constructive neutral evolution so he's quite familiar with the idea that a currently functional trait may have arisen by means other than adaptation.

I'd like to emphasize that no matter what you call it, the best evidence for function is whether a given stretch of DNA is currently being conserved by purifying selection and this definition is entirely based on scientific data. It's true that many scientists talk about SE definition as a historical definition and it's true that they often refer to selected effects as those that have arisen by adaptation (i.e. strong etiological monism). However, that's just sloppy writing because most proponents of SE function are well aware of traits that could have arisen by non-adaptive processes but are nevertheless currently under negative selection.

In summary, the essence of the Brzović and Šustar paper is that ENCODE may have only been proposing a possible function for 80% of the genome and that somewhat justifies their reliance on causal-role effects. Brzović and Šustar then propose that the SE definition of function should be mostly restricted to sequences currently under negative selection and not just to DNA that has arisen by adaptation; it should encompass traits that have arisen by neutral processes.

Function Wars
(My personal view of the meaning of function is described at the end of Part V.)
  1. On the Meaning of the Word "Function"
  2. The Function Wars: Part I
  3. The Function Wars: Part II
  4. The Function Wars: Part III
  5. The Function Wars: Part IV
  6. Restarting the function wars (The Function Wars Part V)
  7. The Function Wars Part VI: The problem with selected effect function

1. See The Function Wars: Part I for a discussion of the Germain et al. paper.

2. They refer to the evolution of the spliceosome as an example of constructive neutral evolution [see Constructive Neutral Evolution].

Brzović, Z., and Šustar, P. (2020) Postgenomics function monism. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 101243. [doi: 10.1016/j.shpsc.2019.101243]

Casane, D., Fumey, J., and Laurenti, P. (2015) L’apophénie d’ENCODE ou Pangloss examine le génome humain. médecine/sciences, 31:680-686. [doi: 10.1051/medsci/20153106023]

Doolittle, W. F. (2013) Is junk DNA bunk? A critique of ENCODE. Proceedings of the National Academy of Sciences 110:5294-5300. [doi: 10.1073/pnas.1221376110]

Germain, P.-L., Ratti, E., and Boem, F. (2014) Junk or functional DNA? ENCODE and the function controversy. Biology & Philosophy 29:807-821. [doi: 10.1007/s10539-014-9441-3]

Kellis, M., Wold, B., Snyder, M. P., Bernstein, B. E., Kundaje, A., Marinov, G. K., Ward, L. D., Birney, E., Crawford, G. E., Dekker, J., Dunham, I., Elnitski, L., Farnham, E. A., Gerstein, M., Giddings, M. C., Gilbert, D. M., Gingeras, T. R., Green, E. D., Guigo, R., Hubbard, T., Kent, J., Lieb, J. D., Myers, R. M., Pazin, M. J., Ren, B., Stamatoyannopoulos, J. A., Weng, Z., White, K. P., and Hardison, R. C. (2014) Defining functional DNA elements in the human genome. Proceedings of the National Academy of Sciences, 111:6131-6138. [doi: 10.1073/pnas.131894811]

The ENCODE Project Consortium (2012) An integrated encyclopedia of DNA elements in the human genome. Nature, 489:57-74. [doi: 10.1038/nature11247]


Alternative splicing and the gene concept

I just learned about a workshop scheduled for the end of this month. The topic is: Evolutionary Roles of Transposable Elements and Non-coding DNA: The Science and the Philosophy.

I'd love to attend but it's just a small workshop designed to encourage dialogue between scientists and philosophers who are interested in the topic. Here's a list of the speakers ...
  • Ryan Gregory: Junk DNA, genome size, and the onion test.
  • Stefan Linquist: Four decades debating junk DNA and the Phenotype Paradigm is (somehow) alive and well.
  • Chris Ponting: 92.9% of the human genome evolved neutrally.
  • Paul Griffiths: Both adaptation and adaptivity are relevant to diagnosing function.
  • Ford Doolittle: Selfish genes and selfish DNA: is there a difference?
  • Justin Garson: Biological functions, the liberality problem, and transposable elements.
  • Joyce Havstad: Evolutionary Thinking about Critique of Function Talk.
  • Guillaume Bourque: Impact of transposable elements on human gene regulatory networks.
  • Ulrich Stegmann: On parity, genetic causation and coding.
  • Steven Downes: Understanding non-coding variants as disease risk alleles.
  • Alexander Palazzo: How nuclear retention and cytoplasmic export of RNAs reduces the deleteriousness of junk DNA.
  • David Haig: Pax somatica
  • Cedric Feschotte: Transposable elements as catalysts of genome evolution.
There's a reading list for the workshop and several of the papers are new to me [Recommended Reading]. I was particularly interested in one of the papers by Stephen M Downes, a philosopher at the University of Utah and one of the participants in the upcoming workshop.
Downes, S.M. (2004) Alternative splicing, the gene concept, and evolution. History and philosophy of the life sciences:91-104. [PDF]
The paper discusses two of my favorite topics: alternative splicing and "what is a gene?" Another philosopher who's interested in defining the biological gene is Paul Griffiths and he will also be at the meeting. I remember talking to Paul and Karola Stotz at the junk DNA meeting in London a few years ago where I tried to explain that alternative splicing may not be real. They were not convinced.

Paul and Karola have written a book about genes where they claim that recent discoveries in genomics, including abundant alternative splicing, have overthrown the standard definition of a molecular gene. Their view on the importance of alternative splicing is not substantially different from that expressed by Stephen Downes in his 2004 paper so I'll concentrate on that paper.

Downes claims that the human proteome is enormously more complex than the number of genes would suggest. He is repeating a claim that, even today, is popular in the scientific literature. That doesn't make it true: in fact, there is no scientific evidence to support such a claim and plenty to refute it [The proteome complexity myth] [How many proteins in the human proteome?]. Downes goes on to offer an explanation for this imagined disparity between the number of genes and the number of proteins: the explanation is alternative splicing.

Griffiths and Stotz make the same argument on page 69 of their book ...
Another discovery of the postgenomic era has been the discrepancy between the number of genes in a genome and the number of products derived from them. For example, the human proteome outnumbers the number of discrete protein-coding genes by at least one order of magnitude. The human genome contains in the region of 20-25,000 genes (the correct number is still not known), while predictions have given numbers as high as 1 million proteins (Mueller et al., 2007). As we will show at length in 4.4 and 4.5, this discrepancy is explained by the fact that cellular mechanisms use the same coding region to make many different products and combine resources from different coding regions to make products.
I don't believe that there's a serious discrepancy that needs explaining. The reference quoted by Griffiths and Stotz does, indeed, make the claim that there may be up to one million different proteins in human cells but it's important to understand where this estimate comes from. Here's what Mueller et al. say in their review,
The relatively low number of human genes suggests that complexity of human biology is achieved through regulation on the transcriptional, post-transcriptional and post-translational level. Alternative splicing and translation as well as post-translational modification (e.g.: phosphorylation, glycosylation and proteolytic cleavage) both contribute to a “proteomic stratification” process that produces a protein population with a diversity that is several orders of magnitude higher than that of the number of genes encoding them. Correspondingly, it has been estimated that the human proteome comprises up to 1,000,000 protein species.
It looks like the estimate of one million different proteins is partly based on the assumption that alternative splicing is a real phenomenon in which case using the estimate to support the idea of alternative splicing seems like a failure in logic. But we don't need to quibble about "estimates" because there's real data to consider (see below).

Setting aside alternative splicing, there's still a major flaw in the argument that an enormous proteome requires rethinking fundamental concepts. Most of the Mueller et al. article is devoted to post-translational modifications that have been understood for decades. If every one of the 20,000 gene products has 50 such variants then there would be one million different protein species but, if true, this is not a "discrepancy" and it would not require any extraordinary explanation like alternative splicing. In other words, there's no mystery that needs explaining.
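The arithmetic behind that statement is trivial, as this little restatement shows (a back-of-the-envelope illustration, not a calculation taken from Mueller et al.).

```python
# Post-translational variants alone can account for a million "protein species"
# without any appeal to alternative splicing.

protein_coding_genes = 20_000
variants_per_gene = 50   # hypothetical average number of modified forms per gene product

protein_species = protein_coding_genes * variants_per_gene
print(f"{protein_coding_genes:,} genes x {variants_per_gene} variants "
      f"= {protein_species:,} protein species")
# 20,000 x 50 = 1,000,000
```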

However, even the idea that the average polypeptide gene product gives rise to 50 different post-translational functional variants is ridiculous. For example, it would mean that each of the enzymes of the glycolytic pathway and the citric acid cycle has, on average, 50 different variants. These enzymes have been studied for half a century and there's no evidence to support such a claim. There's no evidence that every one of the subunits of RNA polymerases has 50 different variants nor is there any evidence that the subunits of the mitochondrial electron transport complexes exist in 50 different biologically relevant variations.

So, we can dismiss one of the major rationalizations for abundant alternative splicing but that doesn't mean that alternative splicing has been disproved. For that we have to look at the direct evidence. The evidence for abundant transcript variants for each multi-exon gene is solid. The important question is whether these variants are just the result of sloppy splicing, in which case they are junk RNA, or whether they are biologically relevant RNAs with a function, in which case they are genuine examples of alternative splicing.

Several groups have used sophisticated techniques to look for the alternative splice variants and they haven't found them [How many proteins in the human proteome?]. For those who are interested in seeing the actual experimental evidence, I recommend a paper by Bhuiyan et al. (2018). They say,
In this paper we take steps to address the gap between the commonplace assumption that most genes have more than one distinct functional product and evidence-based reality.
The "evidence-based reality" is that only ~5% of curated genes produce functionally diverse isoforms. In other words, massive alternative splicing is not supported by the available evidence. Most transcript variants are junk RNA produced by splicing errors.

The gene annotators have already decided that the vast majority of transcript variants are due to splicing errors. They have been purged from the databases. A typical gene in the genome database now has only two or three potential variants and most of those have not been shown to have a function. It's quite reasonable to hypothesize that only 5% of human protein-coding genes are involved in alternative splicing to produce two or more functional protein variants.

I've covered this debate in a series of posts from last year so I won't repeat the arguments here [Are splice variants functional or noise?].1

I believe I'm correct when I say that genuine alternative splicing is not a widespread phenomenon. I'm absolutely certain I'm correct when I say that there's no evidence supporting the claim that almost all genes are alternatively spliced and that the average gene produces ten or more different functional variants.

That's not the point I'm trying to make. My main argument with philosophers who write about the gene concept is that they are uncritically accepting outlandish claims without considering alternative explanations. It may be true that every gene produces multiple splice variants with multiple promoters and transcription termination sites in which case we may or may not need to revise our definition of a gene. However, it may also be true that those variants just represent sloppy biology and they have no biological function, in which case we don't need to upend our understanding of the molecular gene.

It's wrong for philosophers (and scientists) to just assume that one of those possibilities is correct and then use that, possibly incorrect, assumption to re-define the gene. Real philosophers (and scientists) should be absolutely sure of their facts before making such a radical proposal.

P.S. I define a gene as, "A gene is a DNA sequence that is transcribed to produce a functional product." [Debating philosophers: The molecular gene] [Philosophers talking about genes] [What Is a Gene?]. The functional product is RNA and it may be further processed to give rise to ribosomal RNA, snoRNA, or any number of other functional RNAs. It may also give rise to mRNA that's then translated to produce a protein. There are many genuine examples of alternative splicing but that doesn't affect my definition of a gene. It just means that the primary transcript (= functional product) can be subsequently processed in several different ways.


1. [Debating alternative splicing (part I)] [Debating alternative splicing (part II)] [Debating alternative splicing (Part III)] [Debating alternative splicing (Part IV)]

Bhuiyan, S.A., Ly, S., Phan, M., Huntington, B., Hogan, E., Liu, C.C., Liu, J., and Pavlidis, P. (2018) Systematic evaluation of isoform function in literature reports of alternative splicing. BMC Genomics 19:637. [doi: 10.1186/s12864-018-5013-2]

Mueller, M., Martens, L., and Apweiler, R. (2007) Annotating the human proteome: Beyond establishing a parts list. Biochimica et Biophysica Acta (BBA) - Proteins and Proteomics, 1774(2):175-191. [doi: 10.1016/j.bbapap.2006.11.011]

One philosopher’s view of random genetic drift

Random genetic drift is the process whereby some allele frequencies change in a population by chance alone. The alleles are not being fixed or eliminated by natural selection. Most of the alleles affected by drift are neutral or nearly neutral with respect to selection. Some are deleterious, in which case they may be accidentally fixed in spite of being selected against. Modern evolutionary theory incorporates random genetic drift as part of population genetics and modern textbooks contain extensive discussions of drift and the influence of population size. The scientific literature has focused recently on the Drift-Barrier Hypothesis, which emphasizes random genetic drift [Learning about modern evolutionary theory: the drift-barrier hypothesis].

Most of the alleles that become fixed in a population are fixed by random genetic drift and not by natural selection. Thus, in a very real sense, drift is the dominant mechanism of evolution. This is especially true in species with large genomes full of junk DNA (like humans) since the majority of alleles occur in junk DNA where they are, by definition, neutral.1 All of the data documenting drift and confirming its importance was discovered by scientists. All of the hypotheses and theories of modern evolution were, and are, developed by scientists.
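For readers who haven't seen drift in action, here's a minimal Wright-Fisher sketch (my own illustration) that tracks a single neutral allele in a small population with no selection at all.

```python
import random

def drift_to_fixation(pop_size=100, start_freq=0.5, seed=None):
    """Follow one neutral allele until it is lost (freq 0.0) or fixed (freq 1.0)."""
    rng = random.Random(seed)
    freq = start_freq
    generations = 0
    while 0.0 < freq < 1.0:
        # Each of the 2N gene copies in the next generation is drawn at random
        # from the current generation (binomial sampling); no selection is applied.
        copies = sum(rng.random() < freq for _ in range(2 * pop_size))
        freq = copies / (2 * pop_size)
        generations += 1
    return freq, generations

replicates = 200
fixed = sum(drift_to_fixation(seed=i)[0] == 1.0 for i in range(replicates))
print(f"Neutral allele fixed in {fixed}/{replicates} populations, lost in the rest.")
# Chance alone decides the outcome: an allele at frequency p is expected to be
# fixed in roughly a fraction p of the replicate populations.
```

Nothing in the simulation favors one allele over the other, yet every replicate ends with one allele fixed or lost; that is all that evolution by drift means.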

Nothing in biology makes sense except in the light of population genetics.

Michael Lynch
You might be wondering why I bother to state the obvious; after all, this is the 21st century and everyone who knows about evolution should know about random genetic drift. Well, as it turns out, there are some people who continue to make silly statements about evolution and I need to set the record straight.

One of those people is Massimo Pigliucci, a former scientist who's currently more interested in the philosophy of science. We've encountered him before on Sandwalk [Massimo Pigliucci tries to defend accommodationism (again): result is predictable] [Does Philosophy Generate Knowledge?] [Proponents of the Extended Evolutionary Synthesis (EES) explain their logic using the Central Dogma as an example]. It looks like Pigliucci doesn't have a firm grip on modern evolutionary theory.

His main beef isn't with evolutionary biology. He's mostly upset about the fact that science as a way of knowing is extraordinarily successful whereas philosophy isn't producing many results. He loves to attack any scientist who points out this obvious fact. He accuses them of "scientism" as though that's all it takes to make up for the lack of success of philosophy. His latest rant appears on the Blog of the American Philosophers Association: The Problem with Scientism.

I'm not going to deal with the main part of his article because it's already been covered many times. However, there was one part that caught my eye. That's the part where he lists questions that science (supposedly) can't answer. The list is interesting. Pigliucci says,
Next to last, comes an attitude that seeks to deploy science to answer questions beyond its scope. It seems to me that it is exceedingly easy to come up with questions that either science is wholly unequipped to answer, or for which it can at best provide a (welcome!) degree of relevant background knowledge. I will leave it to colleagues in other disciplines to arrive at their own list, but as far as philosophy is concerned, the following list is just a start:
  • In metaphysics: what is a cause?
  • In logic: is modus ponens a type of valid inference?
  • In epistemology: is knowledge “justified true belief”?
  • In ethics: is abortion permissible once the fetus begins to feel pain?
  • In aesthetics: is there a meaningful difference between Mill’s “low” and “high” pleasures?
  • In philosophy of science: what role does genetic drift play in the logical structure of evolutionary theory?
  • In philosophy of mathematics: what is the ontological status of mathematical objects, such as numbers?
[my emphasis LAM]
Before getting to random genetic drift, I'll just note that my main problem with Pigliucci's argument is that there are other definitions of science that render his discussion meaningless. For example, I prefer the broad definition of science—the one that encompasses several of Pigliucci's questions [Alan Sokal explains the scientific worldview] [Territorial demarcation and the meaning of science]. The second point is that no matter how you define knowledge, philosophers haven't been very successful at adding to our knowledge base. They're good at questions (see above) but not so good at answers. Thus, it's reasonable to claim that science (broad definition) is the only proven method of acquiring knowledge. If that's scientism then I think it's a good working hypothesis.

Now back to random genetic drift. Did you notice that one of the questions that science is "wholly unequipped" to answer is the following: "what role does genetic drift play in the logical structure of evolutionary theory?" Really?

Pigliucci goes on to explain what he means ...
The scientific literature on all the above is basically non-existent, while the philosophical one is huge. None of the above questions admits of answers arising from systematic observations or experiments. While empirical notions may be relevant to some of them (e.g., the one on abortion), it is philosophical arguments that provide the suitable approach.
I hardly know what to say.

How many of you believe that the following statements are true with respect to random genetic drift and evolutionary theory?
  1. The scientific literature on all the above is basically non-existent.
  2. The philosophical literature is huge.
  3. The question does not admit of answers arising from systematic observations or experiments.
  4. It is philosophical arguments that provide the suitable approach.
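Just to put claim #3 in perspective: random genetic drift is exactly the kind of thing that admits of systematic observation, experiment, and quantitative modeling, and population geneticists have been doing all three since Fisher and Wright in the 1930s. Here's a minimal sketch of my own (a toy Wright-Fisher simulation with arbitrary parameter values, not anything taken from Pigliucci or from any particular paper) just to illustrate that the fate of a neutral allele in a finite population is a precise, testable question.

```python
# Toy Wright-Fisher simulation of random genetic drift.
# All parameter values are arbitrary and chosen only for illustration.
import random

def wright_fisher(pop_size=100, init_freq=0.5, generations=200, seed=42):
    """Track the frequency of a neutral allele in a diploid population of size N.

    Each generation, 2N gene copies are drawn by binomial sampling from the
    previous generation's allele frequency; that sampling error is drift.
    """
    random.seed(seed)
    freq = init_freq
    trajectory = [freq]
    for _ in range(generations):
        copies = 2 * pop_size
        count = sum(1 for _ in range(copies) if random.random() < freq)
        freq = count / copies
        trajectory.append(freq)
        if freq in (0.0, 1.0):  # allele lost or fixed purely by chance
            break
    return trajectory

if __name__ == "__main__":
    for n in (10, 100, 1000):
        traj = wright_fisher(pop_size=n)
        print(f"N = {n}: final frequency = {traj[-1]:.3f} after {len(traj) - 1} generations")
```

Run it with different population sizes and you'll see that alleles drift to fixation or loss much faster in small populations, which is exactly the sort of result that has been checked against laboratory populations (Buri's classic Drosophila experiments, for example) and against real genomic data.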



A philosopher defends agnosticism

Paul Draper is a philosopher at Purdue University (West Lafayette, Indiana, USA). He has just (Aug. 2, 2017) posted an article on Atheism and Agnosticism on the Stanford Encyclopedia of Philosophy website.

Many philosophers use a different definition of atheism than many atheists do. Philosophers tend to define atheism as the proposition that god(s) do not exist. Many atheists (I am one) define atheism as the lack of belief in god(s). The distinction is important, but for now I want to discuss Draper's defense of agnosticism.

Keep in mind that Draper defines atheism as "god(s) don't exist." He argues, convincingly, that this proposition cannot be proven. He also argues that theism, the proposition that god(s) exist, cannot be proven either. Therefore, the only defensible position for a philosopher like him is agnosticism.

But there's a problem ... and it's similar to the one concerning the definition of atheism. Here's one way to describe an agnostic according to Draper.
... an agnostic is a person who has entertained the proposition that there is a God but believes neither that it is true nor that it is false. Not surprisingly, then, the term “agnosticism” is often defined, both in and outside of philosophy, not as a principle or any other sort of proposition but instead as the psychological state of being an agnostic. Call this the “psychological” sense of the term. It is certainly useful to have a term to refer to people who are neither theists nor atheists, but philosophers might wish that some other term besides “agnostic” (“theological skeptic”, perhaps?) were used.
I wonder whether there are any agnostics who actually adhere to this definition. Most people will, after considering the question, reach a conclusion about whether god(s) exist or not, regardless of whether the conclusion can be rigorously defended. Most of those who choose to call themselves agnostics will have concluded that there are no god(s) and will act out their lives accordingly. They are atheists by my definition.

But this is not the definition of agnosticism that Draper prefers.
If, however, “agnosticism” is defined as a proposition, then “agnostic” must be defined in terms of “agnosticism” instead of the other way around. Specifically, “agnostic” must be defined as a person who believes that the proposition “agnosticism” is true instead of “agnosticism” being defined as the state of being an agnostic. And if the proposition in question is that neither theism nor atheism is known to be true, then the term “agnostic” can no longer serve as a label for those who are neither theists nor atheists since one can consistently believe that atheism (or theism) is true while denying that atheism (or theism) is known to be true.
I know a theist who is content to call himself an agnostic because he cannot prove the existence of his preferred god(s), even though he believes in them and acts accordingly. Similarly, there are many nonbelievers (atheists by my definition) who will accept the proposition that neither the existence nor the nonexistence of god(s) is known for an absolute fact. Thus, you can have believers in god(s) who are agnostics and nonbelievers in god(s) who are agnostics.

This is why Dawkins refers to himself as an agnostic atheist.

The simplest argument for this version of agnosticism is that you cannot prove a negative. Thus, although you can, in theory, prove that god(s) exist, you can never prove that they don't exist. If you define atheism as the belief that god(s) don't exist then that version of atheism is logically indefensible if you are in a philosophy class. In the real world, probabilities count so that if something is extremely improbable you can reasonably maintain that it doesn't exist. You can certainly act and behave as if it doesn't exist. We do that all the time. I'm not worried about being abducted by aliens in near-Earth orbit. See Russell's teapot.
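If you want to see why "probabilities count," here's a toy Bayesian calculation. All of the numbers are invented purely for illustration: the point is only that when the prior probability of a claim is tiny and there is no evidence favoring it, the posterior probability stays tiny, and it's perfectly rational to live as though the claim is false.

```python
# Toy Bayesian update; every number here is made up for illustration only.

def posterior(prior, likelihood_ratio):
    """Posterior probability of a hypothesis, given its prior probability and
    the likelihood ratio P(evidence | hypothesis) / P(evidence | not hypothesis)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# A claim with a very small prior (a teapot in orbit, say) and no evidence
# favoring it (likelihood ratio = 1) stays very improbable.
print(posterior(prior=1e-9, likelihood_ratio=1.0))    # about 1e-9

# Even evidence 1000 times more likely if the claim were true barely moves it.
print(posterior(prior=1e-9, likelihood_ratio=1000.0)) # about 1e-6
```

That's all "you can't prove a negative, but you can act on the odds" amounts to in practice.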

I don't think philosophers like that argument so they look for better ways to defend agnosticism. Here's how Paul Draper does it ....
4. An Argument for Agnosticism
According to one relatively modest form of agnosticism, neither versatile theism nor its denial, global atheism, is known to be true. Robin Le Poidevin (2010: 76) argues for this position as follows:
  • (1) There is no firm basis upon which to judge that theism or atheism is intrinsically more probable than the other.
  • (2) There is no firm basis upon which to judge that the total evidence favors theism or atheism over the other.
It follows from (1) and (2) that
  • (3) There is no firm basis upon which to judge that theism or atheism is more probable than the other.
It follows from (3) that
  • (4) Agnosticism is true: neither theism nor atheism is known to be true.
In my experience, the vast majority of agnostics, including agnostic philosophers, have judged that there are no gods: they don't act as if gods existed. Unless they are being totally irrational, that means they have reached a conclusion about the existence of god(s), and presumably they have reasons for that conclusion, even if it's only a tentative one.

I assume their reasons are the same as mine—there's no believable evidence for the existence of god(s) so there's no reason to believe in them. The evidence strongly favors the proposition that god(s) don't exist.

I reject propositions #1 and #2. I think there IS a firm basis for judging that god(s) don't exist. Part of that "firm basis" comes from my understanding of how the natural universe works and my understanding of the main arguments for the existence of god(s). I reject conclusion #3 for the same reason: there IS a firm basis for judging that atheism is far more probable than theism.

Therefore, in my opinion, strict agnosticism of this sort is false because the nonexistence of god(s) is far more probable than their existence. I have to ask myself why philosophers argue this way. I think it's because they want to set up a rigorous logical proof of their propositions and conclusions. They are uncomfortable with probabilities, and they aren't overly concerned about how people behave in the real world where these discussions play out.

I think my view is similar to pragmatism but, as usual, when you read what philosophers have to say about a viewpoint, it quickly becomes very confusing.