Australia as 100 people

Opting for the force-directed clusters route, Catherine Hanrahan and Simon Elvery for ABC News visualized Australian demographics at the scale of 100 people. Each dot is a person, and as you scroll, you get different breakdowns. It’s percentages, but treating each percentage point as a person makes it more relatable.

See also: Demographics in a world of 100.


STAT Proves Not Resistant To Antibiotic Tropes

Tuesday's Boston Globe carried a piece originating from STAT news on an interesting natural product antibiotic, pleuromutilin.  A research group recently published a new total synthesis of this fungal terpene, an advance which promises to enable greater medicinal chemistry around the molecule.  That part is cool.  Unfortunately, when it gets to the biology of pleuromutilin the piece by Eric Boodman completely spits the bit, trotting out some horribly inaccurate tropes.




The Tell-Tale Brain: The Effect of Predictive Brain Implants on Autonomy

What if your brain could talk to you?

‘That’s a silly question’, I hear you say, ‘My brain already talks to me.’

To the best of our current knowledge, the mind is the brain, and the mind is always talking. Indeed, it’s where all the talking gets started. We have voices in our heads — a cacophony of different thoughts, interests, fears, and hopes — vying for attention. We live in a stream of self-talk. We build up detailed narratives about our lives. We are always spinning yarns, telling stories.

This is all probably true. But our brains don’t tell us everything. The stream of self-talk in which we are situated (or should that be ‘by which we are constituted’?) sits atop a vast, churning sea of sub-conscious neurological activity. We operate on a ‘need to know’ basis and we don’t need to know an awful lot. Many times we sail through this sea of activity unperturbed. But sometimes we don’t. Sometimes what is happening beneath the surface is deeply problematic, hurtful to ourselves and to others, and occasionally catastrophic. Sometimes our brains only send us warning signals when we are about to get washed up on the rocks.

Take epilepsy as an example. The brains of those who suffer from epilepsy occasionally enter into cycles of excessive synchronous neuronal activity. This results in seizures (sometimes referred to as ‘fits’), which can lead to blackouts and severe convulsions. Sometimes these seizures are preceded by warning signs (e.g. visual auras), but many times they are not, and even when they are, the signs often come too late for anything to be done to avert their negative consequences. What if the brains of epileptics could tell them something in advance? What if certain patterns of neuronal activity were predictive of the likelihood of a seizure, and what if this information could be provided to epileptic patients in time for them to avert a seizure?

That’s the promise of a new breed of predictive brain implants. These are devices (sets of electrodes) that are implanted into the brains of epileptics and, through statistical learning algorithms, used to predict the likelihood of seizures from patterns of neuronal activity. These devices are already being trialled on epileptic patients and proving successful. Some people are enthusiastic about their potential to help those who suffer from the negative effects of this condition and, as you might expect, there is much speculation about other use cases for this technology. For example, could predictive brain implants tell whether someone is going to go into a violent rage? Could this knowledge prove useful in crime prevention and mitigation?

These are important questions, but before we get too carried away with the technical possibilities (or impossibilities) it’s worth asking some general conceptual and ethical questions. Using predictive brain implants to control and regulate behaviour might seem a little ‘Clockwork Orange’-y at first glance. Is this technology going to be a great boon to individual liberty, freeing us from the shackles of unwanted neural activity? Or is it going to be a technique of mind control: the ultimate infringement of human autonomy? These are some of the questions taken up in Frederic Gilbert’s paper ‘A Threat to Autonomy? The Intrusion of Predictive Brain Implants’. I want to offer some of my own thoughts on the issue in the remainder of this post.

1. The Three Types of Predictive Brain Implants

Let’s start by clarifying the technology of interest. Brain implants of one sort or another have been around for quite some time. So-called ‘deep brain stimulators’ have been used to treat patients with neurological and psychiatric conditions for a couple of decades. The most common use is for patients with Parkinson’s disease, who are often given brain implants that help to minimise or eliminate the tremors associated with their disease. It is thought that over 100,000 patients worldwide have been implanted with this technology.

Predictive brain implants (PBIs) are simply variations on this technology. Electrodes are implanted in the brains of patients. These electrodes record and analyse the electrical signals generated by the brain. They then use this data to learn and predict when a neuronal event (such as a seizure) is going to take place. At the moment, the technology is in its infancy, essentially just providing patients with warning signals, but we can easily imagine developments in the technology, perhaps achieved by combining it with other technologies. Gilbert suggests that there are three possible forms for predictive brain implants:

Purely Predictive: These are PBIs that simply provide patients with predictive information about future neuronal events. Given the kinds of events that are likely to be targets for PBIs, this information will probably always have a ‘warning signal’-like quality.

Advisory: These are PBIs that provide predictions about future neuronal events, as well as advice to patients about how to avert/manipulate those neuronal events. For example, in the case of epilepsy, a patient could be advised to take a particular medication or engage in some preventive behaviour. The type of advice that could be given could be quite elaborate, if the PBI is combined with other information processing technologies.

Automated: These are PBIs that predict neuronal events and then deliver some treatment/intervention that will avert or manipulate that event. They will do this without first warning or seeking the patient’s consent. This might sound strange, but it is not that strange. There are a number of automated-treatment devices in existence already, such as heart pacemakers or insulin pumps, and they regulate biochemical processes without any meaningful ongoing input from the patient.

The boundary between the first two categories is quite blurry. Given that PBIs necessarily select specific neuronal events from the whirlwind of ongoing neuronal events for prediction, and given that they will probably feed this selective information to patients in the form of warning signals, the predictions are likely to carry some implicit advice. Nevertheless, the type of advice provided by advisory PBIs could, as mentioned above, be more or less elaborate. It could range from the very general ‘Warning: you ought to do something to avert a seizure’ to the more specific ‘Warning: you ought to take medication X, which can be purchased at store Y, which is five minutes from your present location’.

The different types of PBI could have very different impacts on personal autonomy. At first glance, it seems like an automated PBI would put more pressure on individual autonomy than a purely predictive PBI. Indeed, it seems like a purely predictive or advisory PBI could actually benefit autonomy, but that first glance might be misleading. We need a more precise characterisation of autonomy, and a more detailed analysis of the different ways in which a PBI could impact upon autonomy, before we can reach any firm conclusions.

2. The Nature of Autonomy
Many books and articles have been written on the concept of ‘autonomy’. Generations of philosophers have painstakingly identified necessary and sufficient conditions for its attainment, subjected those conditions to revision and critique, scrapped their original accounts, started again, given up and argued that the concept is devoid of meaning, and so on. I cannot hope to do justice to the richness of the literature on this topic here. Still, it’s important to have at least a rough and ready conception of what autonomy is and the most general (and hopefully least contentious) conditions needed for its attainment.

I have said this before, but I like Joseph Raz’s general account. Like most people, he thinks that an autonomous agent is one who is, in some meaningful sense, the author of their own lives. In order for this to happen, he says that three conditions must be met:

Rationality condition: The agent must have goals/ends and must be able to use their reason to plan the means to achieve those goals/ends.

Optionality condition: The agent must have an adequate range of options from which to choose their goals and their means.

Independence condition: The agent must be free from external coercion and manipulation when choosing and exercising their rationality.

I have mentioned before that you can view these as ‘threshold conditions’, i.e. conditions that simply have to be met in order for an agent to be autonomous, or you can have a slightly more complex view, taking them to define a three dimensional space in which autonomy resides. In other words, you can argue that an agent can have more or less rationality, more or less optionality, and more or less independence. The conditions are satisfied in degrees. This means that agents can be more or less autonomous, and the same overall level of autonomy can be achieved through different combinations of the relevant degrees of satisfaction of the conditions. That’s the view I tend to favour. I think there possibly is a minimum threshold for each condition that must be satisfied in order for an agent to count as autonomous, but I suspect that the cases in which this threshold is not met are pretty stark. The more complicated cases, and the ones that really keep us up at night, arise when someone scores high on one of the conditions but low on another. Are they autonomous or not? There may not be a simple ‘yes’ or ‘no’ answer to that question.

Anyway, using the three conditions we can formulate the following ‘autonomy principle’ or ‘autonomy test’:

Autonomy principle: An agent’s actions are more or less autonomous to the extent that they meet the (i) rationality condition; (ii) optionality condition and (iii) independence condition.

We can then use this principle to determine whether, and to what extent, PBIs interfere with or undermine an agent’s autonomy.

What would such an analysis reveal? Well, looking first at the rationality condition, it is difficult to see how a PBI could undermine this. Unless they malfunction or are misdirected, it is unlikely that a PBI would undermine our capacity for rational thought. Indeed, the contrary would seem to be the case. You could argue that a condition such as epilepsy is a disruption of rationality. Someone in the grip of a seizure is no longer capable of rational thought. Consequently, using the PBI to avert or prevent their seizure might actually increase, not decrease, their rationality.

Turning to the other two conditions, things become a little more unclear. The extent to which autonomy is enhanced or undermined depends on the type of PBI being used.

3. Do advisory PBIs support or undermine autonomy?
Let’s start by looking at predictive/advisory PBIs. I’ll treat these as a pair since, as I stated earlier on, a purely predictive PBI probably does carry some implicit advice. That said, the advice would be different in character. The purely predictive PBI will provide a vague, implied piece of advice (“do something to stop x”). The advisory PBI could provide very detailed, precise advice, perhaps based on the latest medical evidence (“take medication x in ten minutes’ time and purchase it from vendor y”). Does this difference in detail and specification matter? Does it undermine or promote autonomy?

Consider this first in light of the optionality condition. On the one hand, you could argue that a vague and general bit of advice is better because it keeps more options open. It advises you to do something, but leaves it up to you exactly what that is. The more specific advice seems to narrow the range of choices, and this may seem to reduce the degree of optionality. That said, the effect here is probably quite slight. The more specific advice is not compelled or forced upon you (more on this in a moment), so you are arguably left in pretty much the same position as someone getting the more general advice, albeit with a little more knowledge. Furthermore, there is the widely-discussed ‘paradox of choice’ which suggests that having too many options can be a bad thing for autonomy because it leaves you paralysed in your decisions. Having your PBI specify an option might help you to break that paralysis. That said, this paradox of choice may not arise in the kinds of scenarios in which PBIs get deployed. The paradox of choice is best documented in relation to consumer behaviours, and it’s not clear how similar this would be to decisions about which intervention to pick to avoid a neuronal event.

The independence condition is possibly more important. At first glance, it seems pretty obvious that an advisory PBI does not undermine the independence condition. For one thing, the net effect of a PBI may be to increase your overall level of independence because it will make you less reliant on others to help you out and monitor your well-being. This is one thing Gilbert discusses in his paper on epileptic patients. He was actually involved with one of the first experimental trials of PBIs and interviewed some of the patients who received them. One of the patients on the trial reported feeling an increased level of independence after getting the implant:

…the patient reported: “My family and I felt more at ease when I was out in the community [by myself], […] I didn’t need to rely on my family so much.” These descriptions are rather clear: with sustained surveillance by the implanted device, the patient experienced novel levels of independence and autonomy. 
(Gilbert 2015, 7)

In addition to that, the advisory PBI is merely providing you with suggestions: it does not force them upon you. You are not compelled to take the medication or follow the prescribed steps. This doesn’t involve manipulation or coercion in the sense usually discussed by philosophers of autonomy.

So things look pretty good for advisory PBIs on the independence front, right? Well, not so fast. There are three issues to bear in mind.

First, although the advice provided by the PBI may not be coercive right now, it could end up having a coercive quality. For example, it could be that following the advice provided by the PBI is a condition of health insurance: if you don’t follow the advice, you won’t be covered by your health insurance policy. That might lend a coercive air to the phenomenon.

Second, people may end up being pretty dependent on the PBI. People might not be inclined to second guess or question the advice provided, and may always go along with what it says. This might make them less resilient and less able to fend for themselves, which would undermine independence. We already encounter this phenomenon, of course. Many of us are already dependent on the advice provided to us by services like Google Maps. I don’t know how you feel about that dependency. It doesn’t bother me most of the time, though there have been occasions on which I have lamented my overreliance on the technology. So if you think that dependency on Google Maps undermines autonomy, then you might think the same of an advisory PBI (and vice versa).

Third, and finally, the impact of an advisory PBI on independence, specifically, and autonomy, more generally, probably depends to a large extent on the type of neuronal event it is being used to predict and manipulate. An epileptic on the cusp of a seizure is already in a state of severely compromised autonomy. They have limited options and limited independence in any event. The advisory PBI might impact negatively on those variables in moments just prior to the predicted seizure, but the net effect of following the advice (i.e. possibly avoiding the seizure) probably compensates for those momentary negative impacts. Things might be very different if the PBI was being used to predict whether you were about to go into a violent rage or engage in some other immoral behaviour. We don’t usually think of violence or immorality as diseases of autonomy, so there may be no equivalent compensating effect. In other words, the negative impact on autonomy might be greater in these use-cases.

4. Do automated PBIs support or undermine autonomy?
Let’s turn finally to the impact of automated PBIs on autonomy. Recall, these are PBIs that predict neuronal events and use this information to automatically deliver some intervention to the patient that averts or otherwise manipulates those neuronal events. This means that the decisions made on foot of the prediction are not mediated through the patient’s conscious reasoning faculties; they are dictated by the machine itself (by its code/software). The patient might be informed of the decisions at some point, but this has no immediate impact on how those decisions get made.

This use of PBIs seems to be much more compromising of individual autonomy. After all, the automated PBI does not treat the patient as someone whose input is relevant to ongoing decisions about medical treatment. The patient is definitely not given any options, nor are they respected as an independent autonomous agent. Consequently, the negative impact on autonomy seems clear.

But we have to be careful here. It is true that the patient with the automated PBI does not exercise any control over their treatment at the time that the treatment is delivered, but this is not to say they exercise no control at all. Presumably, the patient originally consented to having the PBI implanted in their brains. At that point in time, they were given options and were treated as independent autonomous agents. Furthermore, they may retain control over how the device works in the future. The type of treatment automatically delivered by the PBI could be reviewed over time, by the patient, in consultation with their medical team. During those reviews, the patient could once again exercise their autonomy over the device. You could, thus, view the use of the automated PBI as akin to a commitment contract or Ulysses contract. The patient is autonomously consenting to the use of the device as a way of increasing their level of autonomous control at all points in their lives. This may mean losing autonomy over certain discrete decisions, but gaining it in the long run.

Again, the type of neuronal event that the PBI is used to avert or manipulate would also seem crucial here. If it is a neuronal event that otherwise tends to compromise or undermine autonomy, then it seems very plausible to argue that use of the automated PBI does not undermine or compromise autonomy. After all, we don’t think that the diabetic has compromised their autonomy by using an automated insulin pump. But if it is a neuronal event that is associated with immorality and vice, we might feel rather different.

I should add that all of this assumes that PBIs will be used on a consensual basis. If we start compelling certain people to use them, the analysis becomes more complex. The burgeoning literature on neurointerventions in the criminal law would be useful for those who wish to pursue those issues.

5. Conclusion
That brings us to the end. In keeping with my earlier comments about the complex nature of autonomy, you’ll notice that I haven’t reached any firm conclusions about whether PBIs undermine or support autonomy. What I have said is that ‘it depends’. But I think I have gone beyond a mere platitude and argued that it depends on at least three things: (i) the modality of the PBI (general advisory, specific advisory or automated); (ii) the impact on the different autonomy conditions (rationality, optionality, independence) and (iii) the neuronal events being predicted/manipulated.



An Open Letter to Senator Roy Blunt: Save Medical Research By Voting No on the BCRA

Dear Senator Blunt,

I am a geneticist in St. Louis, one of your constituents, and I urge you to vote no on the Senate’s Better Care Reconciliation Act. This act would not only make health care coverage unaffordable for 22 million Americans, as the CBO has estimated, but it would also sabotage medical progress itself through its impact on health care coverage for the millions of people with pre-existing conditions.

Here’s how this would happen. One of the main goals of biomedical scientists like myself is to use advances in genetics to make medical care more effective and less expensive. As we make progress, a growing number of young, seemingly healthy people will discover that they have a genetic risk for a serious disease. In terms of medical care, this is a good thing, because such people can often get treatment before serious symptoms develop.

However, one consequence of early testing to prevent disease is that a seemingly healthy person is suddenly labeled as someone with a pre-existing condition. Without robust insurance protections, those people are doomed to a lifetime of unaffordable health costs. Under the Senate plan, which allows states to waive the requirement that insurance companies cover a broad range of essential health benefits, people at risk for a genetic disease would face a terrible choice: Risk your affordable health coverage by getting a test that may save your life, or skip the test and hope you don’t get sick.

For example, consider a teenager who knows that a sometimes fatal genetic heart condition, such as Long QT syndrome, runs in her family. A genetic test, together with a few other medical tests, will tell her if she has the condition. If the tests are positive, she’ll begin taking a drug that will dramatically lower her risk of dying. But she would also, as someone with a diagnosis of a serious disease, be excluded from affordable health insurance for the rest of her life, if the Senate plan is enacted into law. This disincentive to seek early care harms not only those with genetic diseases, but also all of us, by making genetic medicine more difficult to develop and implement, and thereby undermining medical progress.

Senator, you have consistently been a strong supporter of medical research, and I and my Missouri colleagues are grateful for your support. We urge you to show your support for medical research again by voting no on the Better Care Reconciliation Act.


Michael White, Ph.D.


NY court: Cornell faces being held in contempt after denying physics professor tenure (twice)

Cornell University and a high-powered dean at the school face being held in contempt of court in a case stemming from their decision to deny tenure to a physics professor. Assistant professor Mukund Vengalattore told Retraction Watch he believes the school and the dean are violating a judge’s order instructing them to completely redo his […]

The post NY court: Cornell faces being held in contempt after denying physics professor tenure (twice) appeared first on Retraction Watch.

Cancer paper retracted after author discovers signs of data manipulation

A molecular biology journal has retracted a 2017 cancer paper only two months after it appeared online, after the corresponding author notified the journal about possible data manipulation. According to the notice, Chunsun Fan, from Qidong Liver Cancer Institute & Qidong People’s Hospital in China, requested the retraction after finding “signs of data manipulation” in […]

The post Cancer paper retracted after author discovers signs of data manipulation appeared first on Retraction Watch.

Trolling the uncertainty dial

During the election last year, The New York Times ran an uncertainty dial to show where the vote was swaying. Not everyone appreciated it; many people hated it. The Outline disliked it enough to troll with an uncertainty dial of their own.

Personally, I like the dial, but I think it does require a certain level of statistical knowledge to not lose your marbles watching it.


Trees do not necessarily help in linguistic reconstruction

In historical linguistics, "linguistic reconstruction" is a rather important task. It can be divided into several subtasks, like "lexical reconstruction", "phonological reconstruction", and "syntactic reconstruction" — it comes conceptually close to what biologists would call "ancestral state reconstruction".

In phonological reconstruction, linguists seek to reconstruct the sound system of the ancestral language or proto-language, the Ursprache that is no longer attested in written sources. The term lexical reconstruction is used less frequently, but it obviously points to the reconstruction of whole lexemes in the proto-language. This requires sub-tasks such as semantic reconstruction, where one seeks to identify the original meaning of the ancestral word form from which a given set of cognate words in the descendant languages developed, and morphological reconstruction, where one tries to reconstruct the morphology, such as case systems or frequently recurring suffixes.

In a narrow sense, linguistic reconstruction only points to phonological reconstruction, which is something like the holy grail of computational approaches, since, so far, no method has been proposed that would convincingly show that one can do without expert insights. Bouchard-Côté et al. (2013) use language phylogenies to climb a language tree from the leaves to the root, using sophisticated machine-learning techniques to infer the ancestral states of words in Oceanic languages. Hruschka et al. (2015) start from sites in multiple alignments of cognate sets of Turkic languages to infer both a language tree and the ancestral states, along with the sound changes that regularly occurred at the internal nodes of the tree. Both approaches show that phylogenetic methods could, in principle, be used to automatically infer which sounds were used in the proto-language; and both approaches report rather promising results.

Neither approach, however, is fully convincing, for both practical and methodological reasons. First, they are applied to language families that are considered to be rather "easy" to reconstruct. The tough cases are larger language families with more complex phonology, like Sino-Tibetan or any of its subbranches, including even shallow families like Sinitic (Chinese), or Indo-European, where the greatest achievements of the classical methods for language comparison have been made.

Second, they rely on a questionable assumption: that the sounds used in a set of attested languages are necessarily the pool of sounds that would also be the best candidates for the Ursprache. For example, Saussure (1879) proposed that Proto-Indo-European had at least two sounds that did not survive in any of the descendant languages, the so-called laryngeals, which are nowadays commonly represented as h₁, h₂, and h₃, and which leave complex traces in the vocalism and the consonant systems of some Indo-European languages. Ever since then, it has been a standard assumption that it is always possible that none of the ancestral sounds in a given proto-language is still attested in any of its descendants.

A third interesting point, which I consider a methodological problem of the methods, is that both of them are based on language trees, which are either given to the algorithm or inferred during the process. Given that most if not all approaches to ancestral state reconstruction in biology are based on some kind of phylogeny, even if it is a rooted evolutionary network, it may sound strange that I criticize this point. But in fact, when linguists use the classical methods to infer ancestral sounds and ancestral sound systems, phylogenies do not necessarily play an important role.

The reason for this lies in the highly directional nature of sound change, especially in the consonant systems of languages, which often makes it extremely easy to predict the ancestral sound without invoking any phylogeny more complex than a star tree. That is, in linguistics we often have a good idea about directed character-state changes. For example, if a linguist observes a [k] in one set of languages and a [ts] in another set of languages in the same alignment site of multiple cognate sets, then they will immediately reconstruct a *k for the proto-language, since they know that [k] can easily become [ts] but not vice versa. The same holds for many sound correspondence patterns that can be frequently observed among all languages of the world, including cases like [p] and [f], [k] and [x], and many more. Why should we bother about any phylogeny in the background, if we already know that it is much more likely that these changes occurred independently? Directed character-state assessments make a phylogeny unnecessary.
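As a toy sketch of this tree-free reasoning (the change inventory below is a hypothetical illustration, not a published catalogue): encode attested sound changes as a directed graph, and accept as proto-sound candidates exactly those sounds from which every observed reflex can be derived by some chain of changes.

```python
# Directed edges: change is attested from key -> each value, not the reverse.
# Hypothetical mini-inventory for illustration only.
SOUND_CHANGES = {
    "k": {"ts", "x"},   # e.g. palatalization k > ts, lenition k > x
    "p": {"f"},         # lenition p > f
    "ts": {"s"},        # deaffrication ts > s
}

def reachable(source, graph):
    """All sounds derivable from `source` via any chain of directed changes."""
    seen = {source}
    stack = [source]
    while stack:
        current = stack.pop()
        for nxt in graph.get(current, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def reconstruct(observed, graph):
    """Candidate proto-sounds for one alignment site: every sound from
    which all observed reflexes are reachable. Candidates are drawn from
    the whole graph, so unattested sounds can qualify too."""
    nodes = set(graph) | {s for targets in graph.values() for s in targets}
    return [s for s in sorted(nodes) if set(observed) <= reachable(s, graph)]

print(reconstruct(["k", "ts"], SOUND_CHANGES))  # -> ['k']
```

Note that because candidates come from the whole change network rather than only the attested reflexes, a sufficiently rich network could, in principle, even propose an ancestral sound that survives in no descendant, in the spirit of Saussure's laryngeals.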

Sound change in this sense is simply not well treated in any paradigm that assumes some kind of parsimony, as it simply occurs too often independently. The question is less acute with vowels, where scholars have observed cycles of change in ancient languages that are attested in written sources. Even more problematic is the change of tones, where scholars have even less intuition regarding preferred directions or transitions, and also because ancient data does not describe the tones in the phonetic detail we would need in order to compare it with modern data. In contrast to consonant reconstruction, where we can do almost exclusively without phylogenies, phylogenies may indeed provide some help to shed light on open questions in vowel and tone change.

But one should not underestimate this task, given the systemic pressure that may crucially impact on vowel and tone systems. Since there are considerably fewer empty spots in the vowel and tone space of human languages, it can easily happen that the most natural paths of vowel or tone development (if they exist in the end) are counteracted by systemic pressures. Vowels can be more easily confused in communication, and this holds even more for tones. Even if changes are "natural", they could create conflict in communication, if they produce very similar vowels or tones that are hard to distinguish by the speakers. As a result, these changes could provoke mergers in sounds, with speakers no longer distinguishing them at all; or alternatively, changes that are less "natural" (physiologically or acoustically) could be preferred by a speech society in order to maintain the effectiveness of the linguistic system.

In principle, these phenomena are well-known to trained linguists, although it is hard to find any explicit statements in the literature. Surprisingly, linguistic reconstruction (in the sense of phonological reconstruction) is hard for machines even though it is easy for trained linguists. Every historical linguist has a catalogue of existing sounds in their head as well as a network of preferred transitions, but we lack a machine-readable version of those catalogues. This is mainly because transcription systems widely differ across subfields and families, and because no efforts to standardize these transcriptions have been successful so far.

Without such catalogues, however, any efforts to apply vanilla-style methods for ancestral state reconstruction from biology to linguistic reconstruction in historical linguistics will be futile. We do not need the trees for linguistic reconstruction, but the network of potential pathways of sound change.

  • Bouchard-Côté, A., D. Hall, T. Griffiths, and D. Klein (2013): Automated reconstruction of ancient languages using probabilistic models of sound change. Proceedings of the National Academy of Sciences 110.11: 4224-4229.
  • Hruschka, D., S. Branford, E. Smith, J. Wilkins, A. Meade, M. Pagel, and T. Bhattacharya (2015): Detecting regular sound changes in linguistics as events of concerted evolution. Current Biology 25.1: 1-9.
  • Saussure, F. (1879): Mémoire sur le système primitif des voyelles dans les langues indo-européennes. Teubner: Leipzig.

Teaching skills that save lives

Instructor and student practicing CPR on mannequin.

We observed CPR and AED Awareness Week at the beginning of June. I recently had the opportunity to sit down with Stacy Thorne, a health scientist in the Office of Smoking and Health, who is also a certified first aid, CPR and AED instructor.

Stacy Thorne, PhD, MPH, MCHES

Stacy has a history of involvement in emergency response and preparedness activities at CDC. She is part of the building evacuation team, a group of employees who make sure that staff get out of the building in case of a fire or shelter in place during a tornado. When she learned that CDC offered CPR and AED training classes to employees, she couldn’t think of a better way to continue volunteering while helping people prepare for emergencies.

Stacy became a CPR/AED instructor in 2012. She felt these were important skills to have and wanted to stay up to date with the latest guidelines. She said, “You have to get recertified every two years, so if I was going to have to take the class anyway, why not teach and make sure other people have the skills to save a life.”

Practice makes perfect

Stacy teaches participants first aid, CPR, and AED skills and gives them an opportunity to practice those skills and make sure they are performing them correctly. The class covers first aid for a wide variety of emergency situations, including stroke, heart attack, diabetic emergencies, and heat exhaustion. Participants learn how to:

  • Administer CPR, including the correct number of chest compressions and the number and timing of rescue breaths
  • Use an automated external defibrillator, more commonly referred to as an AED, which can restore a regular heart rhythm during sudden cardiac arrest
  • Splint a broken bone, administer an epinephrine pen for allergic reactions, and bandage cuts and wounds

In order to receive their certification, all participants must complete a skills test where they demonstrate that they can complete these life-saving skills in a series of scenarios.

Lifesaving skills in action

Cardiopulmonary resuscitation, commonly known as CPR, can save a life when someone’s breathing or heartbeat has stopped. CPR can keep blood flowing to deliver oxygen to the brain and other vital organs until help arrives and a normal heart rhythm can be restored.

Stacy shared, “The most rewarding part of teaching is meeting the different people who come to take these classes and hearing the stories of how they have used their skills.” One of her students recalled how she used her CPR skills to save someone while she was out shopping. Her instincts kicked in and when she was able to get the person breathing again the people watching applauded.

Another student reflected, “While I hope I never am in a situation where I need to perform CPR, the notion that I am now equipped with these life-saving skills is reassuring and helps me feel prepared if I should find myself in that scenario.” Stories like these show how important it is for everyone to be trained in first aid, CPR, and AED use. You can spend six hours in training and walk out with a certification that can save someone’s life.

Always on alert

As the mother of a 6-year-old daughter, Stacy is constantly on alert for situations where she might need to use her skills. The closest she has come was when her daughter, eating goldfish crackers while lying down, started gagging; Stacy was at the ready to perform the Heimlich maneuver. Her role as an instructor made Stacy feel confident that she could use her first aid, CPR, and AED skills in an emergency.


Applications open for the August 2017 NCBI-NLM Bioinformatics Hackathon

From August 14th – 16th, the NCBI, with involvement from several NIH institutes, will host a Biomedical Data Science hackathon at the National Library of Medicine on the NIH campus. The hackathon will primarily focus on medical informatics, advanced bioinformatics … Continue reading