Journal republishes withdrawn paper on emergency care prices, amid controversy

The Annals of Emergency Medicine has republished a controversial paper, withdrawn earlier this year, that compared the cost of emergency care at different types of facilities. Because the paper drew heavy criticism when it was originally released, the journal has published a revised version, along with several editorials and discussions between the authors and […]

The post Journal republishes withdrawn paper on emergency care prices, amid controversy appeared first on Retraction Watch.

The Moral Problem of Accelerating Change





Holmes knew that killing people was wrong, but he faced a dilemma. Holmes was a member of the crew aboard the ship The William Brown, which sailed from Liverpool to New York in early April 1842. During its Atlantic crossing, The William Brown ran into trouble. In a tragedy that would repeat itself 70 years later during the fateful first voyage of The Titanic, the ship struck an iceberg off the coast of Canada. The crew and half the passengers managed to escape to a lifeboat. Once there, tragedy struck again. The lifeboat was too heavily laden and started to sink. Something had to be done.

The captain made a decision. The crew would have to throw some passengers overboard, leaving them to perish in the icy waters but raising the boat higher in the water. It was the only way anyone was going to get out alive. Holmes followed these orders and was complicit in the deaths of 14 people. But the remaining passengers were saved. Holmes and his fellow crew were their saviours. Without doing what they did, everyone would have died. For his troubles, Holmes was eventually prosecuted for murder, but the jury refused to convict him of that charge, finding him guilty of manslaughter instead, and Holmes served only six months in jail.

I discuss this case every year with students. Most of them share the jurors’ sense that although Holmes intentionally killed people, he didn’t deserve much blame for his actions. In the circumstances, most of us would have been hard pressed to act differently. Indeed, many of my students think he should have avoided all punishment for his actions.

Holmes’s story illustrates an important point: morality is contextual. What we ought to do depends on what is happening around us. Sometimes our duties and obligations can change. You probably don’t think about this phenomenon too much, taking it as a natural and obvious feature of the moral universe, but the contextual nature of morality poses a challenge during times of accelerating technological change.

That’s one of the central ideas motivating Shannon Vallor’s recent book Technology and the Virtues. I’m still working my way through it (I’ve read approximately 65 pages at the time of writing), but it is already provoking many thoughts and I feel I have to get some of them down on the page. This post is my first attempt to do so, examining one of the key arguments developed by Vallor over the opening chapters of the book.

That argument comes in two parts. The first part claims that there is a particularly acute and important moral problem facing us in the modern age. Vallor calls this the problem of ‘acute technosocial opacity’; I’m going to give it a slightly different name: the moral problem of accelerating change. The second part argues for a solution to this problem: developing a technology-sensitive virtue ethics. I’m going to analyse and evaluate both parts of the argument in what follows.

Before I get into the details, a word of warning. What I am about to say is highly provisional. As noted, I’m still reading Vallor’s book. I am very conscious of the fact that the problems I raise with certain aspects of her argument might be addressed later in the book. So take what I am about to say with a hefty grain of salt.


1. The Moral Problem of Accelerating Change
We are living through a time of accelerating technological change. This is one of the central theses of futurists like Ray Kurzweil. In his infamous 2005 book The Singularity is Near, Kurzweil maps out the exponential improvements in various technologies, including computing speed, size and density of transistors, data storage and so on. Some of these improvements are definitely real: Moore’s law — the observation that the number of transistors that can fit on an integrated circuit doubles every two or so years — is the most famous example. But Kurzweil and his fellow futurists take the idea much further, arguing that converging trends in artificial intelligence, biotech, and nanotech hold truly revolutionary potential for human society. Kurzweil believes that we are heading towards a ‘singularity’ where humans and machines will merge together and we will suffuse the cosmos with our intelligence. Others are less optimistic, thinking that the singularity holds much darker promises.

You don’t have to be a fully signed-up Kurzweilian to believe that there is something to the notion of accelerating change. We all have a sense that things are changing pretty quickly. Jobs that were once stable and dependable sources of income have been automated or eliminated. Digital and smart technologies that were non-existent ten years ago are embedding themselves in our daily lives, turning us all into screen-obsessed zombies. This is to say nothing of the advances in other technologies, such as AI, 3-D printing and brain-computer interfaces. You might think that we can handle all this change — that although things are moving quickly they are not moving so quickly that we cannot keep up. But this assessment might be premature. One of the key insights of Kurzweil’s work — one that has been taken onboard by others — is that accelerating change has a way of sneaking up on us. A doubling of computer speed year-on-year is not that spectacular for the first few years, particularly if you start from a low baseline, but after ten or twenty years the changes become truly astronomical. It’s like that old puzzle about the lily pad that doubles in size every day. If it covers half the pond on day 47, when does it cover the entire pond? Answer: on day 48. One more doubling is enough to cover the pond completely.
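To make the arithmetic concrete (my own illustration, not Kurzweil’s or Vallor’s): a quantity that doubles every period has grown by a factor of 2^n after n periods,

$$2^{10} = 1024, \qquad 2^{20} = 1{,}048{,}576,$$

so ten years of annual doubling is roughly a thousandfold increase, and twenty years a millionfold. In the same way, the lily pad’s coverage on day d is $2^{d-48}$ of the pond: one half on day 47, and the whole pond one doubling later. Smooth exponential trends feel sudden because almost all of the growth happens at the end.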

Accelerating change ramps up the problem of moral contextuality. We all seek moral guidance — even the committed moral relativists among us try to figure out what they ought to do. But as noted in the introduction, moral guidance is often contextual. It depends, critically, on two variables: (i) what is happening in the world around us and (ii) what is within our power to control. Once upon a time, no one would have said that you had a moral obligation to vaccinate your children. It wasn’t within your power to do so. But with the invention of vaccines for the leading childhood illnesses, as well as the copious volumes of evidence in support of their safety and efficacy, what was once unimaginable has become something close to a moral duty. Some people still resist vaccinations, of course, but they do so knowing that they are taking a moral risk: that their decision could impose costs on their child and the children of others. Consequently, there is a moral dimension to their choice that would have been historically unfathomable.

Accelerating change ramps up the problem of moral contextuality. If our technological environment is rapidly changing, it’s hard to offer concrete guidance and moral education to people about what they ought to do. They may face challenges and have powers that are beyond our ability to predict. This is something that most historical schools of moral thought did not envisage. As Vallor notes:

The founders of the most enduring classical traditions of ethics — Plato, Aristotle, Aquinas, Confucius, the Buddha — had the luxury of assuming that the practical conditions under which they and their cohorts lived would be, if not wholly static, at least relatively stable…the safest bet for a moral sage of premodern times would be that he, his fellows, and their children would confront essentially similar moral opportunities and challenges over the course of their lives. 
(Vallor 2016, 6)

All of this suggests that the following argument is worthy of our consideration:


  • (1) In order to provide practical and useful moral guidance to ourselves and our cohorts, we must be able to predict and understand the moral context in which we will operate.
  • (2) Accelerating technological change makes it extremely difficult to predict and understand the moral context in which we and our cohorts will operate.
  • (3) Therefore, accelerating technological change impedes our ability to provide practical and useful moral guidance.


Support for premise (1) derives from the preceding discussion of moral contextuality. If what we ought to do depends on the context, we need to know something about that context in order to provide practical guidance. Support for premise (2) derives from the preceding discussion of accelerating change. Admittedly, I haven’t provided a robust case for accelerating change, but I would suggest that there is something to the idea that is worth taking seriously. I also think the argument as a whole is worthy of serious scrutiny. The question is whether there is any solution to the problem it identifies.


2. The Failures of Abstract Normative Ethics
One possible solution lies in abstract normative principles. Students of moral philosophy will no doubt be suspicious of premise (1). They will know that modern ethical theories — in particular the theories associated with Immanuel Kant and proponents of utilitarianism — offer a type of moral guidance that makes no appeal to the context in which a moral choice must be made.

Consider Kant’s famous categorical imperative. There are various formulations of it, but the most popular and widely discussed is the ‘universalisation’ formulation (note: this is my wording, not Kant’s):

Categorical Imperative: You ought to only act on a maxim of the will that you can, at the same time, will as a universal maxim.

In other words, whenever you are about to do something, ask yourself: would it be acceptable for everyone else, in this circumstance, to act as I am about to act? Are my choices universalisable? If not, then you are making a special exception for yourself and not acting in a moral way. Note how this principle is supposed to ‘float free’ of all contexts. It should work whatever fate may throw your way.

Consider also the basic principle of utilitarianism. Again, there are many formulations of utilitarianism, but they all involve something like this:


Utilitarian Principle: Act in a way that maximises the amount of pleasure (or some other property like ‘happiness’ or ‘desire satisfaction’) and minimises the amount of pain, for the greatest number of people.

This principle also floats free of context. No matter what circumstance you find yourself in, you should always aim to maximise pleasure and minimise pain.

Vallor finds both of these solutions to the problem of accelerating change lacking. The issue is essentially the same for both. Although they may seem to be context-free, abstract moral principles, translating them from their abstract form into practical guidance requires far greater knowledge of the moral context than initially seems to be the case. To know whether the rule you wish to follow is truly universalisable, you have to be able to predict its consequences in multiple scenarios. But prediction of that sort is elusive in an era of rapid technological change. The same goes for figuring out how to maximise pleasure and minimise pain. This has been notoriously difficult for utilitarians, given the complex causal relationships between acts and consequences. It was true even before the era of accelerating technological change. It will hardly be better in it.

For what it is worth, I think Vallor is correct in this assessment. Although abstract moral principles might seem like a solution to the problem of accelerating change, they falter in practice. That said, I think there is some value to the abstraction. Having a general rule of thumb that can apply to all contexts can be a useful starting point. We are always going to find ourselves in new situations and new contexts, irrespective of changes to our technologies. In those contexts we will have to work with the moral resources we have. I may walk into a new context and not know what choice is universalisable or likely to maximise pleasure, but I can at least know what sorts of evidence I should seek out to inform my choice.


3. The Virtue Ethical Solution


Vallor favours a different solution to the problem of accelerating change. She argues that instead of finding solace in abstract moral principles, we should look to the great virtue ethical traditions of the past. These are the traditions associated with Aristotle, Confucius and the Buddha. These traditions emphasise moral character, not moral principles. The goal of moral education, according to these traditions, is to train people to develop virtuous character traits that will enable them to skilfully navigate the ethical challenges that life throws their way.

Why is this a compelling solution to the problem of accelerating change? An analogy might help. As a university lecturer in the 21st century, I am very aware of the challenge of educating students for the future. The common view of higher education is that it is about conveying information. A lecturer stands at a lectern and tries to transfer his/her notes into the minds of the students. The students learn specific propositions, theories and facts that they later regurgitate in exams and, if we are lucky, in their professional lives. The problem with this common view is that it seems ill-equipped to deal with the challenges of the modern world. The information that I have in my notes will soon be outdated. For example, if I am teaching students about the law, I have to be cognisant of the fact that the rules and cases that I am explaining to them today may be overturned or reformed in the future. When the students step out into the professional world, they will have to cope with these new laws — ones they haven’t learned about in the course of their education.

So education cannot simply be an information-dump. It wouldn’t be very useful if it were. This is why there is such an emphasis on ‘skills-based’ education in universities today. The goal of education should not be to get students to learn facts and propositions, but to develop skills that will enable them to handle new information and knowledge in the future. The skill of critical thinking is probably foremost amongst the skills that universities try to cultivate among their students. Most course descriptions nowadays suggest that critical thinking is a key learning objective of college education. As I understand it, this skill is supposed to enable students to critically assess and evaluate any kind of information, argument, theory or policy that might come their way. The successful critical thinker is, consequently, capable of facing the challenges of a changing world.

The goal of virtue ethics is much the same. Virtue ethical traditions try to cultivate moral skills among their adherents. The virtuous person doesn’t just learn a list of rules and regulations that they slavishly follow in all circumstances; rather, they cultivate an ability to critically reflect upon new moral challenges and judge for themselves what the best moral solution might be. This may require casting off the principles that once seemed sensible. As Vallor puts it:

Moral expertise thus entails a kind of knowledge extending well beyond a cognitive grasp of rules and principles to include emotional and social intelligence: keen awareness of the motivations, feelings, beliefs, and desires of others; a sensitivity to the morally salient features of particular situations; and a creative knack for devising appropriate practical responses to those situations, especially where they involve novel or dynamically unstable circumstances. 
(Vallor 2016, 26)

The claim then is that cultivating moral expertise is the ideal way in which to provide moral guidance in an era of accelerating change:

[Ask yourself] which practical strategy is more likely to serve humans best in dealing with [the] unprecedented moral questions [raised by technological advances]: a stronger commitment to adhere strictly to fixed rules and moral principles (whether Kantian or utilitarian)? Or stronger and more widely cultivated habits of moral virtue, guided by excellence in practical and context-adaptive moral reasoning? 
(Vallor 2016, 27)

This is a direct challenge to premise (1) of the argument from accelerating change. The claim is that we do not need to know the particulars of every moral choice we might face in the future to provide moral guidance to ourselves and our cohorts. We just need to develop the context-adaptive skill of moral expertise.


4. Criticisms and Concerns
Of course, the devil is in the detail. Vallor’s book is an attempt to map out and defend exactly what this skill of moral expertise might look like in an era of accelerating technological change. As already noted, I haven’t read the whole book. Nevertheless, I have some initial concerns about the virtue ethical solution that I want to highlight. I know that Vallor is aware of most of these, so hopefully they will be addressed later on.

The first is the problem of parochiality. Prima facie, virtue ethics seems like an odd place to find solace in a time of technological change. The leading virtue ethical traditions are firmly grounded in the parochial concerns of now-dead civilisations: Ancient Greece (Aristotle), China (Confucius) and India (the Buddha). Indeed, Vallor herself acknowledges this, as is clear from the quote I provided earlier on about the luxury these iconic figures had in assuming that things would be roughly the same in the future.

Vallor tries to solve this problem in two ways. First, she tries to argue that there is a ‘thin’ core of shared commitments across all of the leading virtue ethical traditions. This core of commitments can be divorced, to some extent, from the parochial historical concerns of Ancient Greece, China and India. These commitments include: (i) a belief in flourishing as the highest ideal of human existence; (ii) a belief in virtues as character traits shared by certain exemplary figures; (iii) a belief that there is a practical path to the cultivation of moral expertise; and (iv) some conception of human nature that is relatively fixed and stable. Second, she tries to identify virtues that are particularly relevant to our era. She does this by adopting Alasdair MacIntyre’s theory of virtues, which argues that virtues are always tied to the inherent goods of particular social practices. She then tries to argue that there is a set of goods inherent to modern technosocial practice. These goods are mainly tied to our growing global interconnectedness, and the consequent need to cultivate global wisdom, community and justice.

Both of these attempts to overcome the problem of parochiality are interesting and worthy of greater consideration. I hope to examine them in more depth at a later stage. I want to fixate, however, on one aspect of Vallor’s ‘thin’ theory of virtues because I think it reveals another important problem: the problem of human nature. As she notes, all virtue ethical theories share the idea that the goal of moral practice should be to promote human flourishing. They also share the belief that the path to this goal is determined by some conception of human nature. It is because there is a relatively stable and fixed human nature that we can meaningfully identify certain practices and traits as conducive to human flourishing. Vallor accepts that the ‘thick’ details of this theory will vary between the traditions, but also seems committed to the notion that there is some stable core to what is conducive to human flourishing. For example, when commenting on the need to develop social bonds and a sense of community, she says:

Humans in all times and places have needed cooperative social bonds of family, friendship, and community in order to flourish; this is simply a fact of our biological and environmental dependence. 
(Vallor 2016, 50)

This quote shows, I think, how the virtue ethical solution to the problem of accelerating change is to swap abstract and fixed principles for an abstract and fixed human nature. I think this is problematic.

I’m certainly not a denier of human nature. I think there probably are some stable and relatively fixed aspects of human nature, at least for humans as they are currently constituted. But that’s the crucial point. One of the biggest moral challenges posed by technological development is the fact that it is no longer just the environment around us that is changing. Technologies of human-machine integration or human enhancement threaten the historical stability of our ‘biological and environmental dependence’. Two potential technological developments seem to pose a particular challenge in this regard:

The Hyperagency Challenge: This arises from the creation of enhancement technologies that allow us to readily control and manipulate the constitutive aspects of our agency, i.e. our beliefs, desires, moods, motivations and dispositions. If all these things can be erased, changed, overridden, and altered, the idea that there is an internal, fixed aspect of our nature that serves as a moral guide becomes more questionable. I’ve written two papers about this challenge in the past, so I won’t say any more about it here.

The Hivemind Challenge: This arises from the creation of technologies that blur the boundary between human and machine, and enable greater surveillance and interconnectedness of human beings. As I’ve noted in the past, such technologies could, in extreme forms, erode the existence of a stable, individual moral agent. Since most virtue ethical traditions (even the more communitarian ones) assume that the target of moral education is the individual agent, this challenge also calls into question the utility of virtue ethics as a guide to our changing times. Indeed, if we do become a global hivemind, the idea of ‘human’ nature would seem to go out the window.

I don’t know how seriously we should take these challenges. You could argue that the technologies that will make them possible are hypothetical and outlandish — that for the time being we will have a relatively stable nature that can serve as the basis for a technomoral virtue ethics. But if the relevant technologies could be realised, it might call into question the long-term sustainability of a virtue ethical solution to the problem of accelerating change.

The final problem I have is the problem of calibration. This is a more philosophical worry. It is a worry about the philosophical coherence of virtue ethics itself. The claim made by many virtue ethicists is that moral expertise is a skill that is cultivated through practice. The moral expert is someone who can learn from their experiences and the experiences of others, and use their judgment to hone their ability to ‘see’ what is morally required in new contexts. What has never been quite clear to me is how the moral expert is supposed to calibrate their moral sensibility. How do they know that they are honing their skill in the right direction? How can they meaningfully learn from their experiences without some standards against which to evaluate what they have done? I’m not exactly sure what the answer is, but it seems to me that it will require some appeal to abstract moral standards. The budding moral expert will have to assess their actions by appealing to standards such as the general desirability of pleasure over pain, the desirability of individual autonomy and control, and the typical superiority of impartial universal rules over partial and parochial ones. In sum, it seems like the contrast between virtue ethics and abstract moral principles and standards may not be that sharp in practice. We may need both if we are going to successfully navigate these changing times.






“Devastating:” Authors retract paper in Nature journal upon discovering error

Several years ago, Chris Dames thought he had made an exciting discovery, a “secret sauce” that would allow him to design a device using a novel mechanism. In a 2014 Nature Communications paper, Dames—who works at the University of California at Berkeley—and his team described the first experimental results for the device, a photon thermal […]

The post “Devastating:” Authors retract paper in Nature journal upon discovering error appeared first on Retraction Watch.

Visual narrative of six asylum seekers

We often visualize migration and people movement as lines that go from point A to point B. While this can be interesting for overall trends, we lose something about the individuals leaving their homes and traveling in hopes of finding something better. Federica Fragapane, in collaboration with Alex Piacentini, focuses in on six people leaving point A for point B to tell their stories.


Arguments from authority, and the Cladistic Ghost, in historical linguistics


Arguments from authority play an important role in our daily lives and our societies. In political discussions, we often point to the opinion of trusted authorities if we do not know enough about the matter at hand. In medicine, favorable opinions by respected authorities function as one of four levels of evidence (admittedly, the lowest) for judging the effectiveness of a treatment. In advertising, the (at times doubtful) authority of celebrities is used to convince us that a certain product will change our lives.

Arguments from authority are useful, since they allow us to hold an opinion without fully understanding the matter ourselves. Given the ever-increasing complexity of the world in which we live, we could not do without them. We need to build on the opinions and conclusions of others in order to construct our personal little realm of convictions and insights. This is especially important for scientific research, since it is based on a huge network of trust in the correctness of previous studies, which no single researcher could check in a lifetime.

Arguments from authority are, however, also dangerous if we blindly trust them without critical evaluation. To err is human, and there is no guarantee that the analysis of our favorite authorities is always error-proof. For example, famous linguists, such as Ferdinand de Saussure (1857-1913) or Antoine Meillet (1866-1936), revolutionized the field of historical linguistics, and their theories had a huge impact on the way we compare languages today. Nevertheless, this does not mean that they were right in all their theories and analyses, and we should never trust any theory or methodological principle only because it was proposed by Meillet or Saussure.

Since people tend to avoid asking why their authority came to a certain conclusion, arguments from authority can easily be abused. In the extreme, this may culminate in totalitarian societies, or societies ruled by religious fanaticism. To a smaller degree, we can also find this totalitarian attitude in science, where researchers may end up blindly trusting the theory of a certain authority without further critical investigation.

The comparative method

The authority in this context does not necessarily need to be a real person; it can also be a theory or a certain methodology. The financial crisis of 2008 can be taken as an example of a methodology, namely classical "economic forecasting", that turned out to be trusted much more than it deserved. In historical linguistics, we have a similar quasi-religious attitude towards our traditional comparative method (see Weiss 2014 for an overview), which we use in order to compare languages. This "method" is in fact no method at all, but rather a large bundle of techniques by which linguists have been comparing and reconstructing languages for the past 200 years. These include the detection of cognate or "homologous" words across languages, and the inference of regular sound correspondence patterns (which I discussed in a blog post from October last year), but also the reconstruction of sounds and words of ancestral languages not attested in written records, and the inference of the phylogeny of a given language family.

In all of these matters, the comparative method enjoys a quasi-religious authority in historical linguistics. Telling historical linguists that they do not follow the comparative method in their work is among the worst things you can say to them. It hurts. We are conditioned from when we were small to feel this pain. This is all the more surprising given that scholars rarely agree on the specifics of the methodology, as one can see from the table below, where I compare the key tasks that different authors attribute to the "method" in the literature. I think one can easily see that there is not much of an overlap, nor a pattern.

Varying accounts of the "comparative method" in the linguistic literature

It is difficult to tell how this attitude evolved. The foundations of the comparative method go back to the early work of scholars in the 19th century, who managed to demonstrate the genealogical relationship of the Indo-European languages. Even in these early times, we can find hints regarding the "methodology" of "comparative grammar" (see for example Atkinson 1875), but judging from the literature I have read, it seems that it was not until the early 20th century that people began to present the techniques for historical language comparison as a methodological framework.

How this framework became the framework for language comparison, although it was never really established as such, is even less clear to me. At some point, the linguistic world (which was always characterized by aggressive battles among colleagues, fought in the open in numerous publications) decided that the numerous techniques for historical language comparison which had proven the most successful up to that point constituted a specific method, and that this specific method was so well established that no alternative approach could ever compete with it.

Biologists, who have experienced drastic methodological changes during the last decades, may wonder how scientists could believe that any practice, theory, or method is everlasting, untouchable and infallible. In fact, the comparative method in historical linguistics is always changing, since it is a label rather than a true framework with fixed rules. Our insights into various aspects of language change are constantly increasing, and as a result, the way we practice the comparative method is also improving. We keep using the same label, but the product we sell is different from the one we sold decades ago. Historical linguists are, however, very conservative regarding the authorities they trust, and our field has always been very skeptical of newly proposed methodologies.

Morris Swadesh (1909-1967), for example, proposed a quantitative approach to infer the divergence dates of language pairs (Swadesh 1950 and later), which was refuted almost as soon as he proposed it (Hoijer 1956, Bergsland and Vogt 1962). Swadesh's assumption of constant rates of lexical change was surely problematic, but his general idea of looking at lexical change from the perspective of a fixed set of meanings was very creative for its time, and it has given rise to many interesting investigations (see, among others, Haspelmath and Tadmor 2009). Nevertheless, quantitative work was largely disregarded in the following decades. Not many people paid any attention to David Sankoff's (1969) PhD thesis, in which he tried to develop improved models of lexical change in order to infer language phylogenies, which is probably the reason why Sankoff later turned to biology, where his work received the appreciation it deserved.
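For readers unfamiliar with the approach, the core of classical glottochronology can be stated in a single formula (a standard textbook rendering, not a quotation from Swadesh): if two languages share a proportion c of cognates on a fixed meaning list, and each language is assumed to retain a constant proportion r of that list per millennium (figures around 0.81 to 0.86 are commonly cited, depending on the list used), then the estimated time t since their divergence, in millennia, is

$$t = \frac{\log c}{2 \log r}.$$

The factor of 2 reflects the fact that the two lineages lose vocabulary independently after the split, and the constant retention rate r is precisely the assumption that Hoijer (1956) and Bergsland and Vogt (1962) attacked.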

Shared innovations

Since the turn of the millennium, quantitative studies have enjoyed a new popularity in historical linguistics, as can be seen in the numerous papers that have been devoted to automatically inferred phylogenies (see Gray and Atkinson 2003 and passim). The field has begun to accept these methods as additional tools to provide an understanding of how our languages evolved into their current shape. But scholars tend to contrast these new techniques sharply with the "classical approaches", namely the different modules of the comparative method. Many scholars also still assume that the only valid technique by which phylogenies (be they trees or networks) can be inferred is to identify shared innovations in the languages under investigation (Donohue et al. 2012, François 2014).

The idea of shared innovations was first proposed by Brugmann (1884), and has its direct counterpart in Hennig's (1950) framework of cladistics. In a later book of Brugmann's, we find the following passage on shared innovations (or synapomorphies, in Hennig's terminology):
The only thing that can shed light on the relation among the individual language branches [...] are the specific correspondences between two or more of them, the innovations, by which each time certain language branches have advanced in comparison with other branches in their development. (Brugmann 1967[1886]:24, my translation)
Unfortunately, not many people seem to have read Brugmann's original text in full. Brugmann says that subgrouping requires the identification of shared innovative traits (as opposed to shared retentions), but he remains skeptical about whether this can be done in a satisfying way, since we often do not know whether certain traits developed independently, were borrowed at later stages, or are simply misidentified as being "shared". Brugmann's proposed solution is to require that shared, potentially innovative traits be numerous enough to reduce the possibility of chance.
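A minimal illustration of the logic (my example, not Brugmann's): suppose a proto-language had initial *p, and of its daughter languages A, B and C, both A and B show f where C retains p. The change *p > f is a potential shared innovation, and so evidence for a subgroup {A, B}; C's retention of *p, being the inherited state, is evidence of nothing. But the inference only goes through if the f in A and B was neither borrowed nor developed independently in each language, which is exactly Brugmann's worry, and why he demands that such traits be numerous.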

While biology has long since abandoned the cladistic idea, turning instead to quantitative (mostly stochastic) approaches in phylogenetic reconstruction, linguists are surprisingly stubborn in this regard. It is beyond question that uniquely shared traits among languages which are unlikely to have evolved by chance or through language contact are good proxies for subgrouping. But they are often very hard to identify, and this is probably also why our understanding of the phylogeny of the Indo-European language family has not improved much during the past 100 years. In situations where we lack striking evidence, quantitative approaches may just as well be used to infer potentially innovated traits. If we did a better job of documenting these cases (current software, which was designed by biologists, is not really helpful in logging all the decisions and inferences made by the algorithms), we could profit a lot from computer-assisted frameworks in which experts thoroughly evaluate the inferences made by the automatic approaches, in order to generate new hypotheses and improve our understanding of our languages' past.

A further problem with cladistics is that scholars often use the term shared innovation for what are in fact inferences. The cladistic toolkit, and the reason why Brugmann and Hennig thought that shared innovations are needed for subgrouping, rests on the assumption that one knows the true evolutionary history (De Laet 2005: 85). Since the true evolutionary history is a tree in the cladistic sense, an innovation can only be identified if one knows the tree. This means, however, that one cannot use the innovations to infer the tree, if the tree has to be known in advance. What scholars thus mean when talking about shared innovations in linguistics are potentially shared innovations, that is, characters which are diagnostic of subgrouping.

Conclusions

Given how quickly science evolves and how non-permanent our knowledge and our methodologies are, I would never claim that the new quantitative approaches are the only way to deal with trees or networks in historical linguistics. The last word in this debate has not yet been spoken, and while I remain critical of many points, I also see many opportunities for concrete improvement (List 2016). But I see very clearly that our tendency as historical linguists to take the comparative method as the only authoritative way to arrive at a valid subgrouping is not leading us anywhere.

Do computational approaches really switch off the light which illuminates classical historical linguistics?

In a recent review, Stefan Georg, an expert on Altaic languages, writes that the recent computational approaches to phylogenetic reconstruction in historical linguistics "switch out the light which has illuminated Indo-European linguistics for generations (by switching on some computers)", and that they "reduce this discipline to the pre-modern guesswork stage [...] in the belief that all that processing power can replace the available knowledge about these languages [...] and will produce ‘results’ which are worth the paper they are printed on" (Georg 2017: 372, footnote). It seems to me that, if a discipline has been enlightened too much by its blind trust in authorities, it is not the worst idea to switch off the light once in a while.

References
  • Anttila, R. (1972): An introduction to historical and comparative linguistics. Macmillan: New York.
  • Atkinson, R. (1875): Comparative grammar of the Dravidian languages. Hermathena 2.3. 60-106.
  • Bergsland, K. and H. Vogt (1962): On the validity of glottochronology. Current Anthropology 3.2. 115-153.
  • Brugmann, K. (1884): Zur Frage nach den Verwandtschaftsverhältnissen der indogermanischen Sprachen [Questions regarding the closer relationship of the Indo-European languages]. Internationale Zeitschrift für allgemeine Sprachwissenschaft 1. 228-256.
  • Bußmann, H. (2002): Lexikon der Sprachwissenschaft. Kröner: Stuttgart.
  • De Laet, J. (2005): Parsimony and the problem of inapplicables in sequence data. In: Albert, V. (ed.): Parsimony, phylogeny, and genomics. Oxford University Press: Oxford. 81-116.
  • Donohue, M., T. Denham, and S. Oppenheimer (2012): New methodologies for historical linguistics? Calibrating a lexicon-based methodology for diffusion vs. subgrouping. Diachronica 29.4. 505–522.
  • Fleischhauer, J. (2009): A Phylogenetic Interpretation of the Comparative Method. Journal of Language Relationship 2. 115-138.
  • Fox, A. (1995): Linguistic reconstruction. An introduction to theory and method. Oxford University Press: Oxford.
  • François, A. (2014): Trees, waves and linkages: models of language diversification. In: Bowern, C. and B. Evans (eds.): The Routledge handbook of historical linguistics. Routledge: 161-189.
  • Georg, S. (2017): The Role of Paradigmatic Morphology in Historical, Areal and Genealogical Linguistics. Journal of Language Contact 10. 353-381.
  • Glück, H. (2000): Metzler-Lexikon Sprache. Metzler: Stuttgart.
  • Gray, R. and Q. Atkinson (2003): Language-tree divergence times support the Anatolian theory of Indo-European origin. Nature 426.6965. 435-439.
  • Harrison, S. (2003): On the limits of the comparative method. In: Joseph, B. and R. Janda (eds.): The handbook of historical linguistics. Blackwell: Malden and Oxford and Melbourne and Berlin. 213-243.
  • Haspelmath, M. and U. Tadmor (2009): The Loanword Typology project and the World Loanword Database. In: Haspelmath, M. and U. Tadmor (eds.): Loanwords in the world’s languages. de Gruyter: Berlin and New York. 1-34.
  • Hennig, W. (1950): Grundzüge einer Theorie der phylogenetischen Systematik. Deutscher Zentralverlag: Berlin.
  • Hoenigswald, H. (1960): Phonetic similarity in internal reconstruction. Language 36.2. 191-192.
  • Hoijer, H. (1956): Lexicostatistics. A critique. Language 32.1. 49-60.
  • Jarceva, V. (1990): . Sovetskaja Enciklopedija: Moscow.
  • Klimov, G. (1990): Osnovy lingvističeskoj komparativistiki [Foundations of comparative linguistics]. Nauka: Moscow.
  • Lehmann, W. (1969): Einführung in die historische Linguistik. Carl Winter: Heidelberg.
  • List, J.-M. (2016): Beyond cognacy: Historical relations between words and their implication for phylogenetic reconstruction. Journal of Language Evolution 1.2. 119-136.
  • Makaev, E. (1977): Obščaja teorija sravnitel’nogo jazykoznanija [Common theory of comparative linguistics]. Nauka: Moscow.
  • Matthews, P. (1997): Oxford concise dictionary of linguistics. Oxford University Press: Oxford.
  • Rankin, R. (2003): The comparative method. In: Joseph, B. and R. Janda (eds.): The handbook of historical linguistics. Blackwell: Malden and Oxford and Melbourne and Berlin.
  • Sankoff, D. (1969): Historical linguistics as stochastic process. PhD thesis. McGill University: Montreal.
  • Weiss, M. (2014): The comparative method. In: Bowern, C. and B. Evans (eds.): The Routledge handbook of historical linguistics. Routledge: New York. 127-145.

Wind, Warm Water Revved Up Melting Antarctic Glaciers

A rock outcropping on Fleming Glacier.

The flow speeds of Antarctica's fastest-accelerating glaciers in 2008-2014 likely increased because of warm water blown into their bay by La Niña and another climate pattern.



What you need to know about harmful algal blooms


By Julianne Murphy, intern, Environmental Health

Warm weather brings nature walks, picnics and sunny days by the shore, but it can also bring unwanted changes to your favorite beach. As the temperature rises, lake and ocean waters can turn from blue to mossy green as algae proliferate in unsightly and potentially harmful algal blooms.

What are harmful algal blooms?

Algae are plant-like organisms of one or more cells that use sunlight to make food. Together they can form colonies called algal blooms in both marine and freshwater systems. Some of these algal blooms are hazardous to health, but not all algal blooms are harmful.

Harmful algal blooms may release toxins at concentrations unsafe for humans and animals and may drastically reduce the oxygen available to aquatic life. In freshwater bodies, cyanobacteria, aka “blue-green algae,” can produce dangerous cyanotoxins; in saltwater or brackish water, acid-generating plankton – dinoflagellates and diatoms – can pose a health threat.

Should I be concerned about algal blooms?

Algal blooms can pose a risk for human and animal health. People and animals can become ill through eating, drinking, breathing or having direct skin contact with harmful algal blooms and their toxins. Illnesses vary based on the exposure, toxins and toxin levels. Public health and environmental laboratories test samples from harmful algal blooms to confirm the presence and level of toxicity. Remember, not all algal blooms are harmful.

How are public health officials responding to the increase in algal bloom events?

As climate change events amplify conditions favorable to algal blooms, public health scientists are studying when and where associated illnesses are occurring and how to mitigate the effects of exposure. Their efforts have led to increased laboratory testing and electronic surveillance measures at the state and federal level.

For example, public health and environmental officials in Alaska have been tracking and testing harmful algal blooms. The Alaska Harmful Algal Bloom Network, a collaboration of the Alaska Department of Health and Social Services (DHSS) and regional monitoring programs, analyzes fish kills, unusual animal behaviors and other related phenomena to provide early warning of developing coastal marine blooms. DHSS scientists analyze human specimens for illnesses associated with harmful algal blooms, such as paralytic shellfish poisoning (PSP) caused by saxitoxins. PSP is a potentially fatal poisoning with no treatment except supportive care. Samples from symptomatic patients are forwarded to the Centers for Disease Control and Prevention (CDC) for confirmatory testing as needed. Testing of asymptomatic individuals may be included in future studies.

In addition, Alaska Department of Environmental Conservation (DEC) laboratories test marine shellfish meat samples to protect public health and safety, as well as for regulatory purposes, illness investigations and, upon request, non-commercial shellfish. This monitoring literally saves lives.

David Verbrugge, chief chemist at the DHSS Division of Public Health, explains the value of Alaska’s testing of harmful algal blooms: “[Laboratory analysis] helps us to understand the nature of PSP exposures: frequency of occurrence, confirmation when lacking meals to test, and the presence or absence of toxins in asymptomatic co-exposed groups. It also allows us to let people know what they are eating before they eat it.”

Is the CDC involved in testing and surveillance for harmful algal blooms?

Yes, only for freshwater. In 2016, CDC created the One Health Harmful Algal Bloom System to provide a voluntary, electronic reporting system for states, federal agencies and their partners. Using the system, which integrates human, animal and environmental health data using a One Health approach, public health departments and their environmental and animal health partners can report bloom events, and human and animal cases of associated illness. Members of the public may report a bloom event or a case of human or animal illness to the One Health system by contacting their local or state health department.

What is the outlook for future testing and surveillance of harmful algal blooms?

As climatic conditions become more favorable to development of harmful algal blooms, state and local health departments will have to ramp up surveillance and testing to protect public health and to preserve local revenue from beaches. These actions will come with a price tag, requiring action at all levels of government. Resources can be leveraged through collaboration to research and expand clinical testing capacity for these persistent health threats.


The post What you need to know about harmful algal blooms appeared first on APHL Lab Blog.

Another retraction hits high-profile food researcher under fire

It’s been a rough year for Brian Wansink. Last year, the prominent food researcher posted a blog praising a student for her productivity in his lab. But when Wansink described his methods, readers became concerned that the lab was using improper research techniques to generate more publications. Earlier this year, researchers posted an analysis of […]

The post Another retraction hits high-profile food researcher under fire appeared first on Retraction Watch.

Article defending colonialism draws rebuke, journal defends choice to publish

Facing a volley of criticism for publishing an essay that called for a return to colonialism, a journal editor has defended his decision to print the article. “The Case for Colonialism,” published Sept. 8 in Third World Quarterly (TWQ), was written by Bruce Gilley, a professor of political science at Portland State University. For an […]

The post Article defending colonialism draws rebuke, journal defends choice to publish appeared first on Retraction Watch.