Coronavirus testing accuracy

Medical tests do not always provide certain results. Quartz illustrated this with a simulated antibody test that identifies 90% of those infected and 95% of those not infected:

That means that if you took the test and got a positive result, there’s a 45.1% chance it’s correct. If you got a negative result, there’s a 99.6% chance your result is accurate.

Of course, this doesn’t mean you shouldn’t take the test. Detecting 90% of infections, even with some false positives, is a good thing. However, it helps to understand what the numbers mean before you act on them.
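
For a sense of where figures like 45.1% and 99.6% come from, here’s a quick Bayes’ rule sketch. The prevalence value below is an assumption for illustration; the excerpt doesn’t say what infection rate the Quartz simulation used.

```python
# A minimal Bayes' rule sketch of how test accuracy and prevalence combine.
# The prevalence of 4.4% is assumed for illustration only.
def predictive_values(sensitivity, specificity, prevalence):
    """Return (P(infected | positive), P(not infected | negative))."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence

    ppv = true_pos / (true_pos + false_pos)   # chance a positive is correct
    npv = true_neg / (true_neg + false_neg)   # chance a negative is correct
    return ppv, npv

# A test that catches 90% of infections and clears 95% of the uninfected,
# applied to a population where roughly 4.4% are infected (an assumption).
ppv, npv = predictive_values(0.90, 0.95, 0.044)
print(f"Positive result correct: {ppv:.1%}")   # roughly 45% under these assumptions
print(f"Negative result correct: {npv:.1%}")   # roughly 99.5% under these assumptions
```

The takeaway is that the chance a positive result is correct depends heavily on how common infection is, not just on the test itself.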


Reduced privacy risk in exchange for accuracy in the Census count

Mark Hansen for The Upshot describes the search for balance between individual privacy and an accurate 2020 Census count. It turns out not to be that difficult to reconstruct person-level data from publicly available aggregates:

On the face of it, finding a reconstruction that satisfies all of the constraints from all the tables the bureau produces seems impossible. But Mr. Abowd says the problem gets easier when you notice that these tables are full of zeros. Each zero indicates a combination of variables — values for one or more of block, sex, age, race and ethnicity — for which no one exists in the census. We might find, for example, that there is no one below voting age living on a particular block. We can then ignore any reconstructions that include people under 18 living there. This greatly reduces the set of viable reconstructions and makes the problem solvable with off-the-shelf software.
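
To make the role of those zeros concrete, here’s a toy version of the idea, with made-up numbers and a brute-force search rather than the bureau’s actual software:

```python
# Toy illustration of reconstruction: treat each published table as a
# constraint and enumerate person-level records that satisfy all of them.
# The block, categories, and counts here are hypothetical.
from itertools import combinations_with_replacement

TOTAL_PEOPLE = 3
COUNT_BY_AGE = {"under_18": 0, "18_plus": 3}   # note the zero cell
COUNT_BY_SEX = {"female": 2, "male": 1}

ages = list(COUNT_BY_AGE)
sexes = list(COUNT_BY_SEX)
person_types = [(a, s) for a in ages for s in sexes]

def consistent(assignment):
    """Check a candidate set of person records against every published table."""
    age_counts = {a: 0 for a in ages}
    sex_counts = {s: 0 for s in sexes}
    for age, sex in assignment:
        age_counts[age] += 1
        sex_counts[sex] += 1
    return age_counts == COUNT_BY_AGE and sex_counts == COUNT_BY_SEX

# Brute force: every multiset of 3 person records drawn from the type space.
candidates = list(combinations_with_replacement(person_types, TOTAL_PEOPLE))
viable = [c for c in candidates if consistent(c)]

print(f"{len(candidates)} candidate reconstructions, {len(viable)} viable")
for records in viable:
    print(records)
```

With the zero cell for people under 18, every candidate that places a minor on the block is thrown out immediately, and the published marginals pin down a single viable set of records.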

To combat this, the Census is looking into injecting more uncertainty into their published data. The challenge is figuring out how much uncertainty is too much and what level of privacy is enough.
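
The usual mechanic for injecting that uncertainty is adding random noise, scaled by a privacy parameter, to each published count. A rough sketch of the idea, with a made-up table and arbitrary parameter values (the bureau’s production system is far more elaborate than this):

```python
# Rough sketch of noise injection: add Laplace noise, scaled by a privacy
# parameter epsilon, to each published count. Counts and epsilons are made up.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical block-level counts before any protection.
true_counts = {"under_18": 0, "18_to_64": 21, "65_plus": 7}

def noisy_counts(counts, epsilon):
    """Add Laplace noise with scale 1/epsilon to each count.

    Rounding and clamping to zero are simplifications for readability.
    """
    return {
        key: max(0, round(value + rng.laplace(scale=1.0 / epsilon)))
        for key, value in counts.items()
    }

for eps in (0.1, 1.0, 10.0):
    print(eps, noisy_counts(true_counts, eps))
```

Smaller values of epsilon mean more noise, which is exactly the accuracy-versus-privacy trade-off in question.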


Unreliable gun data from the CDC

FiveThirtyEight and The Trace investigate the uncertainty and accuracy of gun injury data released by the Centers for Disease Control and Prevention:

An analysis performed by FiveThirtyEight and The Trace, a nonprofit news organization covering gun violence in America, found that the CDC’s report of a steady increase in nonfatal gun injuries is out of step with a downward trend we found using data from multiple independent public health and criminal justice databases. That casts doubt on the CDC’s figures and the narrative suggested by the way those numbers have changed over time.

It might be time to update the statistical models used to estimate injuries.


How people interpret probability through words

In the early 1990s, the CIA published internal survey results on how people within the organization interpreted probabilistic words such as “probable” and “little chance”. Participants were asked to attach a probability percentage to the words. Andrew Mauboussin and Michael J. Mauboussin ran a public survey more recently to see how people interpret the words now.

The main point, like in the CIA poll, was that words matter. Some words like “usually” and “probably” are vague, whereas “always” and “never” are more certain.

I wonder what the results would look like if, instead of showing a word and asking for a probability, you flipped it around: show a probability and ask people for a word to describe it. I’d like to see that spectrum.
