Unattested character states

In an earlier post from January 2016, I argued that it is important to account for directional processes when modeling language history through character-state evolution. In previous papers (List 2016; Chacon and List 2015), I tried to show that this can easily be done with asymmetric step matrices in a parsimony framework. Only later did I realize that this is nothing new for biologists who work on morphological characters, thus supporting David's claim that we should not compare linguistic characters with the genotype, but with the phenotype (Morrison 2014). Early this year, a colleague introduced me to Mk-models in phylogenetics, which were first proposed by Lewis (2001) and allow the analysis of multi-state characters in a likelihood framework.

What surprised me is that Mk-models seem to outperform parsimony frameworks, despite being much simpler than the elaborate step matrices defined for morphological characters (Wright and Hillis 2014). Today, I read a recent paper by Wright et al. (2016) which even shows how asymmetric transition rates can be handled in likelihood frameworks.
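To make the likelihood side of this concrete, here is a minimal sketch of what asymmetric transition rates mean, using the simplest possible case: a two-state character with a gain rate and a loss rate that differ. This is a toy illustration, not anyone's published model; the rates, state labels and branch length below are invented. For two states, the matrix exponential of the rate matrix has a simple closed form:

```python
import math

def two_state_probs(a, b, t):
    """Transition probabilities after branch length t for a two-state
    character with asymmetric rates: a = rate of 0 -> 1 (gain),
    b = rate of 1 -> 0 (loss).  Returns the 2x2 matrix P where
    P[i][j] = P(state j at time t | state i at time 0).
    Closed form: P_ij(t) -> stationary frequency as t grows."""
    s = a + b
    e = math.exp(-s * t)
    p01 = (a / s) * (1.0 - e)   # probability of having ended up in state 1 from 0
    p10 = (b / s) * (1.0 - e)   # probability of having ended up in state 0 from 1
    return [[1.0 - p01, p01],
            [p10, 1.0 - p10]]

# With a much larger than b, a gain is far more probable than a loss
# over the same branch length -- the asymmetry the Wright et al. paper
# models with priors on the rates:
P = two_state_probs(a=2.0, b=0.2, t=1.0)
```

The point of the sketch is only that asymmetry lives in the rates, not in the states: with `a > b`, `P[0][1]` comes out much larger than `P[1][0]`, and as `t` grows both rows converge to the stationary frequencies `a/(a+b)` and `b/(a+b)`.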

Being by no means an expert in phylogenetic analyses, especially not in likelihood frameworks, I tend to have a hard time understanding what is actually being modeled. However, if I correctly understand the gist of the Wright et al. paper, it seems that we are slowly approaching a situation in which more complex scenarios of lexical character evolution in linguistics no longer need to rely on parsimony frameworks.

But, unfortunately, we are not there yet; and it is even questionable whether we ever will be. The reason is that all multi-state models that have been proposed so far only handle transitions between attested character states: unattested states can neither be included in the analyses nor be inferred.
I have pointed to this problem in some previous blog posts, the last one published in June, where I mentioned Ferdinand de Saussure (1857-1913), who postulated two unattested consonantal sounds for Indo-European (Saussure 1879), one of which was later found to have survived in Hittite, a language that was deciphered and shown to be Indo-European only about 30 years later (Lehmann 1992: 33).

The fact that it is possible to use our traditional methods to infer unattested sounds from circumstantial evidence, but not to include our knowledge about them in phylogenetic analyses, is a huge drawback. Potentially even greater are the situations where even our traditional methods do not allow us to infer unattested data. Think, for example, of a word that was once present in some language but was later completely lost. Given the ephemeral nature of human language, we have no way of knowing this, but we know very well that it happens easily; just think of terms for old technology, like walkman or soon even iPod, which the younger generations have never heard of.

Colleagues with whom I have discussed my concerns in this regard are often more optimistic than I am, saying that even if the methods cannot handle unattested characters they could still find the major signal, and thus at least tell us the general tendency of how a language family evolved. However, for classical linguists, who can infer quite a lot using laborious methods that still need to be applied manually, it leaves a sour taste to be told that the analysis deliberately ignored crucial aspects of the processes and phenomena they understand very well. By way of analogy: if we found that some intelligence test is right in only about 80% of all cases, we would surely abstain from using it to decide who is allowed to take up studies at a university.

I also think that this is not a satisfying solution for the analysis of morphological data in biology. It is quite likely that some ancient species had traits that later evolved into the traits we now observe, but which are themselves no longer attested anywhere, either in fossils or in the genes. I also wonder how well phylogenetic frameworks generally account for the fact that the evidence we are left with may reflect much less than what was once there.

In Chacon and List (2015), we circumvented the problem by adding ancestral but unattested sounds to the step matrices in our parsimony analysis. This is of course not entirely satisfactory, as it adds a heavy bias to the analysis of sound change, which no longer tests all possible solutions but only the ones we fed into the algorithm. For sound change, it may be possible to substantially expand the character space by adding sounds attested across the world's languages, and then having the algorithms select the most probable transitions. But given that we still know barely anything about the general transition probabilities of sound change, and that databases like PHOIBLE (Moran et al. 2014) list more than 2,000 different sounds for a bit more than 2,000 languages, it seems like a Sisyphean challenge to tackle this problem consistently.
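To illustrate the general idea, here is a generic Sankoff-style parsimony sketch with invented sounds and invented costs (it is not our actual step matrix from the Tukanoan paper): a hypothetical palatalized state, written `ky` for ASCII convenience, is added to the state space as an unattested intermediate between `k` and `ch`, so that the two-step path k → ky → ch is cheaper than a direct k → ch change.

```python
def sankoff(tree, step, states):
    """Minimal Sankoff parsimony.  tree is either a leaf (its observed
    state, as a string) or a (left, right) tuple of subtrees.
    step[(a, d)] is the cost of changing from ancestral state a to
    descendant state d along one branch (asymmetric costs allowed).
    Returns, for each state s, the minimal total cost of the subtree
    given that its root is in state s."""
    if isinstance(tree, str):                      # leaf: state is observed
        return {s: (0 if s == tree else float('inf')) for s in states}
    child_costs = [sankoff(child, step, states) for child in tree]
    return {s: sum(min(step[(s, c)] + costs[c] for c in states)
                   for costs in child_costs)
            for s in states}

# Hypothetical sound states; 'ky' is the unattested intermediate.
states = ['k', 'ky', 'ch']
step = {('k', 'k'): 0,  ('k', 'ky'): 1,  ('k', 'ch'): 5,    # invented costs:
        ('ky', 'k'): 2, ('ky', 'ky'): 0, ('ky', 'ch'): 1,   # stepwise changes cheap,
        ('ch', 'k'): 4, ('ch', 'ky'): 3, ('ch', 'ch'): 0}   # jumps and reversals costly
# One language keeps 'k'; two sister languages show 'ch'.
tree = ('k', ('ch', 'ch'))
costs = sankoff(tree, step, states)   # minimal cost per possible root state
```

With these invented costs the best scenarios have total cost 3 and route the change through the unattested `ky`; if `ky` is deleted from the state space, the best achievable cost rises to 4 and the reconstruction changes. That is the sense in which leaving unattested states out of the matrix, or feeding in only a hand-picked few, biases the analysis.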

What can we do in the meantime? Not very much, it seems. But we can still try to improve our methods in baby steps, trying to get a better understanding of the major and minor processes in linguistic and biological evolution; and not forgetting that, although I was only talking about phylogenetic tree reconstruction, in the end we also want to have all of this done in network approaches.

  • Chacon, T. and J.-M. List (2015) Improved computational models of sound change shed light on the history of the Tukanoan languages. Journal of Language Relationship 13: 177-204.
  • Lehmann, W. (1992) Historical linguistics. An Introduction. Routledge: London.
  • Lewis, P. (2001) A likelihood approach to estimating phylogeny from discrete morphological character data. Systematic Biology 50: 913-925.
  • List, J.-M. (2016) Beyond cognacy: Historical relations between words and their implication for phylogenetic reconstruction. Journal of Language Evolution 1: 119-136.
  • Moran, S., D. McCloy, and R. Wright (eds) (2014) PHOIBLE Online. Max Planck Institute for Evolutionary Anthropology: Leipzig.
  • Morrison, D.A. (2014) Are phylogenetic patterns the same in anthropology and biology? bioRxiv.
  • Saussure, F. (1879) Mémoire sur le système primitif des voyelles dans les langues indo-européennes. Teubner: Leipzig.
  • Wright, A. and D. Hillis (2014) Bayesian analysis using a simple likelihood model outperforms parsimony for estimation of phylogeny from discrete morphological data. PLoS ONE 9(10): e109210.
  • Wright, A., G. Lloyd, and D. Hillis (2016) Modeling character change heterogeneity in phylogenetic analyses of morphology through the use of priors. Systematic Biology 65: 602-611.

The ‘In Principle’ Objection to Privatisation

A few years ago, the Irish government privatised the water supply. Rather than water being a freely provided public service, funded out of general taxation, it was now to be a privately supplied good, with each household paying an annual fee that varied with usage. It proved to be quite a controversial move, leading to numerous protests and a significant loss of legitimacy for the government. So much so, in fact, that the future of water privatisation in Ireland is currently uncertain.

The privatisation of formerly public services often proves controversial. Privatisation is a major feature of the so-called ‘neo-liberal’ agenda. It is often favoured by economists and policy wonks on the grounds of efficiency. The typical argument is this: if we learned nothing else from the history of communism and socialism, it is that government agencies aren’t particularly good at supplying scarce resources. The incentives are out of whack. They are often hugely wasteful, and tend to over-supply or under-supply goods and services. Private agents, motivated by profit and incentivised by prices, are often much more efficient, supplying just as much as the market demands, at a price that maximises societal welfare. (Note: this isn’t always true: for certain goods and services even classical economists will agree that private markets can fail to be efficient — I talk about this in more detail in my posts on Hayek’s famous argument against centrally planned economies).

And yet the process still proves controversial, with many philosophers and activists resisting the wave of privatisation. Sometimes their arguments are strictly empirical in nature: they disagree with the economists and policy wonks who insist upon the efficiency of private markets and the inefficiency of the state. And there are, indeed, empirical studies that support their disagreements. But sometimes their arguments are more philosophical or normative in nature: they hold that, irrespective of the consequences of privatisation, there is something morally suspect about the process. It leads to the selling off of the public sphere and the erosion of public authority and legitimacy. They challenge privatisation on principled grounds, not empirical ones.

Avihay Dorfman and Alon Harel are possibly the leading defenders of this ‘in principle’ objection to privatisation. Over the past few years, they have authored a number of papers that try, with increasing degrees of sophistication and rigour, to present a robust, non-empirical objection to privatisation. They argue that there are some ‘intrinsically public goods’ that should never be handed over to private agents, and they work hard to identify the key properties of these goods.

Although I cannot hope to do justice to the full body of their work on this topic, I do want to look at their main line of argument in the remainder of this post. I do so by analysing and evaluating one of their most recent papers, entitled ‘Against Privatisation as Such’, which appeared in the Oxford Journal of Legal Studies in 2016. Their argument in that paper is that privatisation is objectionable because it undermines public engagement with and responsibility for certain kinds of decision.

1. Two Conceptions of Privatisation
To understand the argument, we first need to understand what it means to privatise something. We all have an intuitive and commonsense understanding of what this entails. The opening example of the privatisation of Irish water gives us some sense of what happens. When the government ‘privatises’ the provision of a particular good or service, it transfers decision-making authority for the provision of that good or service to a private agent (company/corporation). This private agent will then follow a slightly different set of incentives/reasons than a public agency would when supplying the good or service. The hope is that they will follow a set of incentives/reasons that enables them to supply the good or service in a more efficient manner.

This gives us two distinct ways of understanding the process of privatisation. Both of these have been identified and discussed in the academic debate:

The Reasons View - To privatise the provision of a good or service, X, is to change (wholly or partially) the reasons for which someone supplies that good or service.

The Agency View - To privatise the provision of a good or service, X, is to transfer the decision-making authority in relation to that good or service to a private agent.

Dorfman and Harel note that many theorists seem to endorse the Reasons View, and in doing so they often stumble upon an interesting way in which to defend privatisation. One of the features of the Reasons View is that it pays little attention to the identity of the decision-maker when it comes to classifying a particular decision as being ‘private’ or ‘public’. Instead, it focuses on the reasons utilised by the decision-maker. So on this view what is distinctive about private decision-making processes is that they are motivated by things like profit and loss and other relevant economic variables, and not by concerns like fairness, justice and the common good. These concerns are more typically associated with public decision-making processes.

One of the consequences of this method of categorisation is that it is possible for private agents to act in a public-spirited way (or vice versa, i.e. for public agents to act in a privatised way). You could imagine a private company being contracted by the government to provide a good or service on the basis that they direct themselves to the common good. You could also imagine private companies and contractors acting for a combination of reasons, some of which are strictly economic in nature and others of which are more public spirited. Indeed, this might lead you to defend privatisation on the grounds that it gives you the ‘best of both worlds’: it brings the efficiencies of the private sector without necessarily losing the public touch.

The essence of Dorfman and Harel’s case against privatisation is that the Reasons View gets it wrong. Privatisation is not solely or even primarily about changing the reasons for which a decision is made; it is really about the transferal of authority. This means that the Agency View is more correct, and when you understand privatisation in terms of agency, you begin to see why it might be objectionable in principle: because it changes the nature and locus of legitimacy in society.

The defence of this argument comes in two parts. The first part highlights the flaws in the Reasons View; the second explains why the transferal of authority is so problematic.

2. Against the Reasons View
Dorfman and Harel don’t present their case against the Reasons View in formal terms, but I’m going to do so, for ease of exposition. Their argument is a very simple one and works like this:

  • (1) If the Reasons View of privatisation were correct, then all we should care about (when it comes to debating the pros and cons of the process) are the reasons motivating the decisions, not who makes them.

  • (2) We do not only care about the reasons motivating particular decisions; we also care about the identity of the agent making those decisions.

  • (3) Therefore, the Reasons View of privatisation must be incorrect.

The key to this argument is the second premise. Dorfman and Harel develop a few lines of support for this premise. One of them is to look at how people talk about privatisation. They consider the work of Richard Bauman, who once identified five characteristics of privatisation:

(1) the complete or partial sell-off (through asset or share sales) of major public enterprises; (2) the deregulation of a particular industry; (3) the commercialization of a government department; (4) the removal of subsidies to producers; and (5) the assumption by private operators of what were formerly exclusively public services, through, for example, contracting out. 
(Bauman 2000, 2 - sourced in Dorfman and Harel 2016)

They argue that it is difficult to make sense of Bauman’s five characteristics if you favour the Reasons View. While some of the characteristics could be understood in terms of changing the reasons for which a decision is made (specifically, characteristics 2, 3 and 4), others cannot. Indeed, the other characteristics are probably best understood in terms of the Agency View. Dorfman and Harel then argue that if someone like Bauman wished to stick with the Reasons View he would need to explain away the fact that some of the relevant characteristics of privatisation are concerned with agency.

Another line of argument comes from the debate about the normative justification for punishment. The typical rationales for punishment are either retributive or consequentialist in nature. The retributive rationale holds that it is intrinsically good to punish people in proportion to their wrongdoing. The consequentialist rationale holds that punishment is justified if it achieves some desirable end (e.g. deterrence). Both rationales are, to some extent, concerned with the reasons motivating the decision to punish. This might suggest that the debate about the justification of punishment plays out in an arena that is shaped by the Reasons View. But this is not the case. Many of the participants in the debate about the normative justification of punishment, be they retributive or consequentialist in their leanings, hold that there is another condition that must be satisfied before punishment can be legitimate. They hold that the punishment must be administered by a public official. Indeed, most theorists of punishment implicitly or explicitly assume that private individuals are never the appropriate administrators of punishment. This is why they usually balk at the notion of individuals taking it upon themselves to punish wrongdoers (so-called ‘vigilante justice’). It is difficult to explain this in terms of the Reasons View.

A final line of support comes from legal doctrine. Dorfman and Harel reference two cases in their article that highlight how courts often reject the privatisation of public services on grounds that are unrelated to the Reasons View. One case comes from the Indian Supreme Court and had to do with the outsourcing of police services on short-term contracts. The court rejected this practice on the grounds that policing was something that must ‘necessarily…be delivered by forces that are and personnel who are completely under the control of the state’ (Dorfman and Harel 2016, 410). The other case is Israeli and had to do with the privatisation of prison services. This was rejected by the court on the grounds that only public officials are normatively competent to deny someone’s liberty.

In short, Dorfman and Harel reject the Reasons View because it conflicts with how we characterise and critique the process of privatisation. They argue that if you pay attention to both of these things you will find that the identity of the agent is a paramount concern.

3. In Defence of the Agency View
The preceding line of argument only gets us so far. We have cause to reject the Reasons View, and we have found considerable concern with the identity of the agent making the decisions. The problem is that this concern is somewhat mysterious in character. Why must punishment (or whatever) be administered by a public agent? The defender of the Agency View owes us some account of that.

Dorfman and Harel try to step up to the plate and provide us with exactly that. Their argument is quite complex and convoluted; I’ll present a simplified version of it here. In essence, it has to do with the need for certain decisions (punishment being a good example) to be publicly legitimised. If you are going to make decisions that could harm people, or deprive them of or redistribute core rights and responsibilities, you need those decisions to be publicly legitimate. The problem, according to Dorfman and Harel, is that you will never get that legitimacy if you privatise those decisions. The reason is that privatisation necessarily undermines and erodes public responsibility for, and engagement with, the relevant decisions, both of which are essential for legitimacy.

The argument seems to work like this:

  • (4) In order for particular decisions to have public legitimacy they must emanate from the public (i.e. the public must be engaged with the decision-making process and must be able to take responsibility for the decision).

  • (5) In order for a decision to emanate from the public it must be made by an agent who defers to the public in a particular way.

  • (6) A private agent, contracted to make those decisions, cannot defer to the public in the right way.

  • (7) Therefore, privatised decision-making lacks the legitimacy that is required for particular decisions.

You could challenge several aspects of this argument. It is certainly a little bit sketchy about which decisions require this form of legitimation, and there is plenty of disagreement in the academic literature about the conditions that must be satisfied for a decision to be legitimate. Nevertheless, most of the action in Dorfman and Harel’s article centres on premises (5) and (6).

Let’s consider premise (5). Many ‘public’ decisions are made by individual decision-makers (public officials, government ministers, government agencies, etc.). Members of the general public may have little direct influence and involvement in those decisions. Nevertheless, in order for the decisions to be legitimate, the individual decision-maker must defer to the public in making the decisions. They must see and understand themselves to be public servants. They must take the public’s views into consideration and be answerable to the public for what they do. In short, they must be part of a normative community/institution that engages with and answers to the general public. That’s the view that underlies premise (5).

Premise (6) is then defended on the grounds that private agents, who are contracted to make similar decisions, can never show the right kind of deference. This is fleshed out in what Dorfman and Harel term the ‘different contracts’ argument. A public official is employed under a particular set of norms, norms that require their integration into and answerability to the general public. A private agent is employed under a contractual agreement. Their duties and obligations will be set by the terms and conditions of that contractual agreement. Their duty is not to the general public, it is to the contract. There is consequently distance between them and the general public, not deference.

You might respond to this by arguing that deference to the public could be built into the terms and conditions of the contractual agreement. But Dorfman and Harel argue that this won’t work. They argue that every privatisation agreement will have to afford the private agent a ‘zone of permissibility’ (or ‘autonomy’) in which they are free to exercise their own judgment about what to do. This zone of permissibility will necessarily remove them from the kind of public deference that is required. The reason is that, in order to make any sense at all, a privatisation agreement must defer to the skills and judgment of the private agent. Recall that the whole point of privatisation is that private agents are able to provide a good or service more efficiently than public agents. This necessarily implies a zone of permissibility. The problem is that within that zone of permissibility, the private agent will have an ‘immunity’ or ‘claim right’ against the state: they will be legally entitled to resist state interference and direction. This is what distances them from the public.

Dorfman and Harel concede that, in principle, a private agent could be fully integrated into a public agency, but they argue that in such a case the private agent would cease to be ‘private’. They also concede that there are some seemingly public agencies that have zones of autonomy that seal them off from political interference, giving an independent election-monitoring agency as an example. But they argue that such agencies are not ‘private’ in any meaningful sense. They still serve and engage with the public in the fulfilment of their duties. They are not employed or constituted under the same kind of private contractual agreement as a private agent.

4. Concluding Thoughts
That’s Dorfman and Harel’s argument in very broad outline. Suffice to say there is a lot of detail and nuance missing from this summary. For those who want that detail and nuance, I recommend reading the original paper. Granting that my summary is imperfect, I nevertheless want to close with three critical reflections on the argument presented above.

First, I’m not sure I am entirely convinced by the whole ‘different contracts’ line of argument. It seems to me that contracts of employment (or service provision) are pretty fluid things. There is a classical notion of contracts that views them as strictly private agreements, untethered from general norms and outside influences. But this classical notion is obviously flawed, and certainly does not represent any contemporary legal position on the nature of a contractual agreement. Nowadays, so-called private contractual agreements are frequently influenced, directed and constrained by public policy. Consumer protection legislation, for example, severely restricts what can be put into a private contract. Given this, I’m not sure why the contractual agreement underlying a privatisation arrangement couldn’t be heavily influenced and constrained by public policy, and, more importantly, why the contract couldn’t insist on deference to the public. I’m also not sure that I am convinced by the claim that a ‘zone of permissibility’ makes that big a difference in this regard since, presumably, the employment contracts of public officials will include similar zones of permissibility. That doesn’t completely distance them from the general community, nor does it grant them a relevant claim right.

Second, I’m somewhat puzzled by the distinction between the Reasons View and the Agency View. One problem I have is that Dorfman and Harel’s defence of the Agency View seems to bring them full circle, back to a position that is very similar to the Reasons View. Think about it. When it boils down to it, their major line of objection is that privatisation agreements give private agents a ‘zone of permissibility’ in making decisions. But why is this so objectionable? Because it means they do not defer to the public in making decisions, i.e. they do not take the public appropriately into account in their decision-making. This seems pretty close to saying that the problem is that they don’t act for sufficiently ‘public’ reasons. Now, I’m sure Dorfman and Harel would respond by saying that their argument is not just about reasons, it is also about the formal legal immunities that the contractual agreement would grant the private agent. But if those formal immunities are up for grabs — as I suggested in the previous paragraph — then it seems like reasons for action would be the only relevant difference.

Third, I worry that the argument as a whole is a little bit too clever. It seems to come perilously close to being true by definition. Privatisation is being defined as the process whereby decision-making authority is transferred to a private agent through a contractual agreement that distances them from the public. Distancing thus becomes the defining feature of privatisation. And this distancing, in turn, is why the process is objectionable. Any so-called privatisation arrangement that doesn’t involve this distancing is not really privatisation at all.

That said, I do find something compelling in the line of argument sketched by Dorfman and Harel. I do think public responsibility and engagement are important in some contexts, and I do worry that privatisation agreements tend to be more corrosive of those virtues. But that’s not to say that they are always and everywhere more corrosive, or that they might not have other, countervailing virtues.



SAGE journal retracts three more papers after discovering faked reviews

SAGE recently retracted three 2015 papers from one of its journals after the publisher found the articles were accepted with faked peer reviews. The retraction notices call out the authors responsible for submitting the reviews. This trio of retractions is the second batch of papers withdrawn by Technology in Cancer Research & Treatment over faked […]

The post SAGE journal retracts three more papers after discovering faked reviews appeared first on Retraction Watch.

Released FDA docs reveal details of agency’s (failed) attempt to retract paper

Earlier this year, a raging controversy regarding a new drug spilled into the pages of a leading medical journal: the head of the U.S. Food and Drug Administration and another official publicly called for the retraction or correction of a peer-reviewed article about the drug. They didn’t get their wish. Now, documents released by the […]

The post Released FDA docs reveal details of agency’s (failed) attempt to retract paper appeared first on Retraction Watch.

08/21/17 PHD comic: ‘Eclipse’

Piled Higher & Deeper by Jorge Cham
title: "Eclipse" - originally published 8/21/2017




Sunsquatch, the only eclipse map you need

It’s solar eclipse time. There have been a lot of maps leading up to this point, but this one by Joshua Stevens is the only one you really need. The overlap between sasquatch sightings and the total eclipse path.







Equality is about protection, not love

Again and again I see the same-sex marriage (SSM) debate cast in terms of love. And while I agree that people should be free to love those they wish (if it is mutual, and freely given, and no minors are involved), that is not what the SSM debate is really about. Instead it is about […]

Weekend reads: A troubling precedent out of China; journals as corporate tools; postdocs and suicide

The week at Retraction Watch featured the retraction of a paper linked to vaccines, and what happens when a journal retracts 107 papers at once. Here’s what was happening elsewhere: Cambridge University Press pulls hundreds of papers from one of its journals on topics the Chinese government deemed politically sensitive. (Holly Else, Times Higher Education) […]

The post Weekend reads: A troubling precedent out of China; journals as corporate tools; postdocs and suicide appeared first on Retraction Watch.

What’s noise, what’s Illumina bias, and what’s signal?

The PhD student and I are trying to pin down the sources of variation in our sequencing coverage. It's critical that we understand this, because position-specific differences in coverage are how we are measuring differences in DNA uptake by competent bacteria.

Tl;dr:  We see extensive and unexpected short-scale variation in coverage levels in both RNA-seq and DNA-based sequencing. Can anyone point us to resources that might explain this?

I'm going to start not with our DNA-uptake data but with some H. influenzae RNA-seq data. Each of the two graphs below shows the RNA-seq coverage and ordinary sequencing coverage of a 3 or 4 kb transcriptionally active segment.

Each coloured line shows the mean RNA-seq coverage for 2 or 3 biological replicates of a particular strain.  The drab-green line is from the parental strain KW20 and the other two are from competence mutants.  Since these genes are not competence genes the three strains have very similar expression levels.  The replicates are not all from the same day, and were not all sequenced in the same batch.  The coloured shading shows the standard errors for each strain.

We were surprised by the degree of variation in coverage across each segment, and by the very strong agreement between replicates and between strains.  Since each segment is from within an operon, its RNA-seq coverage arises from transcripts that all began at the same promoter (to the right of the segment shown).  Yet the coverage varies dramatically.  This variation can't be due to chance differences in the locations and endpoints of reads, since it's mirrored between replicates and between strains.  So our initial conclusion was that it must be due to Illumina sequencing biases.  

But now consider the black-line graphs inset below the RNA-seq lines. These are the normalized coverages produced by Illumina sequencing of genomic DNA from the same parental strain KW20. Here there's no sign of the dramatic variation seen in the RNA-seq data. So the RNA-seq variation cannot be due to biases in the Illumina sequencing after all.


How else could the RNA-seq variation arise? 
  • Sequence-specific biases in RNA degradation during RNA isolation?    If this were the cause I'd expect to see much more replicate-to-replicate variation, since our bulk measurements saw substantial variation in the integrity of the RNA preps.
  • Biases in reverse transcriptase?  
  • Biases at the library construction steps?  I think these should be the same in the genomic-DNA sequencing.

Now on to the control sequencing from our big DNA-uptake experiment.

In this experiment the PhD student mixed naturally competent cells with chromosomal DNA, and then recovered and sequenced the DNA that had been taken up.  He sequenced three replicates with each of four different DNA preparations; 'large-' and 'short-' fragment preps from each of two different H. influenzae strains ('NP' and 'GG').  As controls he sequenced each of the four input samples.  He then compared the mean sequencing coverage at each position in the genome to its coverage in the input DNA sample.

Here I just want to consider results of sequencing the control samples.  We only have one replicate of each sample, but the 'large' (orange) and 'short' (blue) samples effectively serve as replicates.  Here are the results for DNA from strain NP.  Each sample's coverage has been normalized as reads per million mapped reads (large: 2.7e6 reads; short: 4.7e6 reads).
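Reads-per-million normalization is just a rescaling by each sample's total mapped reads, so samples with different sequencing depths can be compared directly. A minimal sketch, using the total-read counts quoted above but invented raw per-position counts:

```python
def normalize_rpm(raw_coverage, total_mapped_reads):
    """Scale per-position coverage to reads per million mapped reads."""
    scale = 1e6 / total_mapped_reads
    return [c * scale for c in raw_coverage]

# Hypothetical raw counts at three positions in the 'large' sample,
# normalized by its ~2.7e6 total mapped reads.
large_rpm = normalize_rpm([270, 540, 135], 2.7e6)
print(large_rpm)  # approximately [100.0, 200.0, 50.0]
```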

The top panel shows coverage of a 1 kb segment of the NP genome.  Coverage is fairly even over this interval, and fairly similar between the two samples.  Note how similar the small-scale variation is; at most positions the orange and blue samples go up and down roughly in unison.  I presume that this variation is due to minor biases in the Illumina sequencing.

The middle panel is a 10 kb segment.  The variation looks sharper only because the scale is compressed, but again the two traces roughly mirror each other.

The lower panel is a 100 kb segment.  Again the variation looks sharper, and the traces roughly mirror each other.  Overall the coverage is consistent, not varying more than two-fold.

Now here's the corresponding analysis of variation in the GG control samples.   In the 1 kb plot the very-small-scale position-to-position variation is similar to that of NP and is mirrored by both samples.  But the blue line also has larger-scale variation, over hundreds of bp, that isn't seen in the orange line.  This '500-bp-scale' variation shows up even more dramatically in the 10 kb view.  We also see more variation in the orange line than we saw with NP.  In the 100 kb view we also see extensive variation in coverage over intervals of 10 kb or larger, especially in the blue sample.  It's especially disturbing that there are many regions where coverage is unexpectedly low.

The 500-bp-scale variation can't be due to the blue sample having more random noise in read locations, since it actually has four-fold higher absolute coverage than the orange sample.  Here are coverage histograms for all four samples (note the extra peak of low coverage positions in the GG short histogram):
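A coverage histogram of this kind is just the per-position coverage values binned by depth; an extra peak at low depth means many positions share unexpectedly low coverage. A sketch with made-up coverage values and an arbitrary bin width:

```python
import collections

def coverage_histogram(coverage, bin_width=50):
    """Bin per-position coverage values; returns {bin_start: count}."""
    counts = collections.Counter((c // bin_width) * bin_width for c in coverage)
    return dict(sorted(counts.items()))

# Invented per-position coverage; the cluster of values below 50
# would appear as a separate low-coverage peak in the histogram.
coverage = [12, 48, 130, 145, 110, 125, 20, 430]
print(coverage_histogram(coverage))  # {0: 3, 100: 4, 400: 1}
```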

If you've read all the way to here:  You no doubt have realized that we don't understand where most of this variation is coming from.  We don't know why the RNA-seq coverage is so much more variable than the DNA-based coverage.  We don't know how much of the variation we see between the NP samples is due to sequencing biases, or noise, or other factors.  We don't know why the GG samples have so much more variation than the NP samples and so much unexpectedly low coverage.  (The strains' sequences differ by only a few %.)

We'd be grateful for any suggestions, especially for links to resources that might shed light on this.

Later:  From the Twitterverse,  a merenlab blog post about how strongly GC content can affect coverage: Wavy coverage patterns in mapping results.  This prompted me to check the %GC for the segment shown in the second RNA-seq plot above.  Here it is, comparing regular sequencing coverage to %GC:

I don't see any correlation, particularly not the expected correlation of high GC with low coverage.  Nor is there any evident correlation with the RNA-seq coverage for the same region.
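The GC-vs-coverage check can be expressed as: compute %GC in non-overlapping windows along the segment, then correlate it with the mean coverage in the same windows. A sketch in plain Python; the sequence and coverage values are invented (the example happens to show the strong negative correlation we did *not* see):

```python
def window_gc(seq, window):
    """%GC in consecutive non-overlapping windows along a sequence."""
    gc = []
    for i in range(0, len(seq) - window + 1, window):
        win = seq[i:i + window]
        gc.append(100.0 * sum(b in "GCgc" for b in win) / window)
    return gc

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

seq = "AAAATTTTGGGGCCCCATGCATGCAAAAGGGG"   # hypothetical 32 bp segment
gc = window_gc(seq, 8)                     # [0.0, 100.0, 50.0, 50.0]
coverage = [150.0, 90.0, 120.0, 110.0]     # hypothetical mean coverage per window
print(pearson(gc, coverage))               # near 0 would mean no GC-coverage correlation
```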
