Virtual proctoring simulation

Many colleges use virtual proctoring software in an effort to reduce cheating on tests that students take remotely at home. But the software relies on facial recognition and assumptions about what a proper testing environment looks like. YR Media breaks down the flaws and even provides a simulation so that you can see what it’s like.


Simulating how just a little gender bias in the workplace can lead to big effects up the chain

Yuhao Du, Jessica Nordell, and Kenneth Joseph used simulations to study the effects of small gender biases at entry level up to executive level. It doesn’t take much to skew the distribution. For NYT Opinion, Yaryna Serkez shows the simulation in action with moving bubbles and stacked area charts for each work level.

The simulation imagines a company where female performance is undervalued by 3 percent. Each dot represents an employee, and they either move up with a promotion or stay put. The distribution of men and women starts out even but ends up heavily skewed.
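
For a rough sense of how that mechanism plays out, here is a minimal sketch of a promotion-pipeline simulation in Python. The level count, cohort size, and promotion numbers are illustrative assumptions, not the study’s actual parameters; only the 3 percent undervaluation comes from the piece.

```python
import random

LEVELS = 8                  # assumed number of job levels (illustrative)
PENALTY = 0.03              # female performance undervalued by 3 percent (from the piece)
HIRES_PER_CYCLE = 500       # assumed entry-level hires per cycle
PROMOTIONS_PER_CYCLE = 50   # assumed promotions per level per cycle

def new_hire():
    # Entry level starts out evenly split between men and women.
    return {"gender": random.choice("MF"), "level": 0}

def perceived_score(emp):
    # True performance is identical on average; women's observed
    # scores are discounted by the small bias.
    score = random.gauss(100, 15)
    return score * (1 - PENALTY) if emp["gender"] == "F" else score

company = [new_hire() for _ in range(HIRES_PER_CYCLE)]

for _ in range(20):  # run 20 promotion cycles
    for level in reversed(range(LEVELS - 1)):
        candidates = [e for e in company if e["level"] == level]
        # Promote the candidates with the highest perceived scores.
        for emp in sorted(candidates, key=perceived_score, reverse=True)[:PROMOTIONS_PER_CYCLE]:
            emp["level"] += 1
    company.extend(new_hire() for _ in range(HIRES_PER_CYCLE))

top = [e for e in company if e["level"] == LEVELS - 1]
share = sum(e["gender"] == "F" for e in top) / max(len(top), 1)
print(f"Share of women at the top level: {share:.0%}")
```

Even with identical underlying performance, the small per-evaluation discount compounds at every promotion step, which is what the moving-bubble version makes visible.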


✚ Looking at What’s Not There – The Process 140

Welcome to issue #140 of The Process, the newsletter for FlowingData members where we look closer at how charts are made. I’m Nathan Yau, and this week I’m curious about what you can glean from what you can’t see in the data.


Algorithm leads to arrest of the wrong person

Even though there was supposedly a person in the decision-making process and the surveillance photo wasn’t actually of Robert Julian-Borchak Williams, he still ended up handcuffed in front of his own home. Kashmir Hill, reporting for The New York Times:

This is what technology providers and law enforcement always emphasize when defending facial recognition: It is only supposed to be a clue in the case, not a smoking gun. Before arresting Mr. Williams, investigators might have sought other evidence that he committed the theft, such as eyewitness testimony, location data from his phone or proof that he owned the clothing that the suspect was wearing.

In this case, however, according to the Detroit police report, investigators simply included Mr. Williams’s picture in a “6-pack photo lineup” they created and showed to Ms. Johnston, Shinola’s loss-prevention contractor, and she identified him. (Ms. Johnston declined to comment.)


Face depixelizer with machine learning, and some assumptions

In crime shows, they often have an amazing tool that turns a low-resolution, pixelated image of a person’s face into a high-resolution, highly accurate picture of the perp. Face Depixelizer is a step toward that with machine learning, except it seems to assume that everyone looks the same.

There might still be some limitations.


Dataset as worldview

Hannah Davis works with machine learning, which relies on an input dataset to build a model of the world. Davis was working with a model for a while before realizing the underlying data was flawed:

This led to a perspective that has informed all of my work since: a dataset is a worldview. It encompasses the worldview of the people who scrape and collect the data, whether they’re researchers, artists, or companies. It encompasses the worldview of the labelers, whether they labeled the data manually, unknowingly, or through a third party service like Mechanical Turk, which comes with its own demographic biases. It encompasses the worldview of the inherent taxonomies created by the organizers, which in many cases are corporations whose motives are directly incompatible with a high quality of life.


Myth of the impartial machine

In its inaugural issue, Parametric Press describes how bias can easily come about when working with data:

Even big data are susceptible to non-sampling errors. A study by researchers at Google found that the United States (which accounts for 4% of the world population) contributed over 45% of the data for ImageNet, a database of more than 14 million labelled images. Meanwhile, China and India combined contribute just 3% of images, despite accounting for over 36% of the world population. As a result of this skewed data distribution, image classification algorithms that use the ImageNet database would often correctly label an image of a traditional US bride with words like “bride” and “wedding” but label an image of an Indian bride with words like “costume”.
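
For a back-of-the-envelope sense of that skew, you can compare each region’s share of images to its share of world population, using the figures quoted above. The snippet below is just that arithmetic; a ratio above 1 means over-representation in the dataset.

```python
# Data share vs. population share, using the figures quoted above.
# A ratio above 1 means over-representation in ImageNet, below 1 under-representation.
regions = {
    "United States": {"population_share": 0.04, "image_share": 0.45},
    "China + India": {"population_share": 0.36, "image_share": 0.03},
}

for name, r in regions.items():
    ratio = r["image_share"] / r["population_share"]
    print(f"{name}: {ratio:.2f}x its population share")
# United States: roughly 11x its population share
# China + India: roughly 0.08x its population share
```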

Click through to check out the interactives that serve as learning aids. The other essays in this first issue are also worth a look.


Systematic Reviews & Meta-Analyses: A 5-Step Checkup

It’s easy to be a little blinded by the specialized statistical techniques in systematic reviews and meta-analyses. As with any type of study, though, there are bad ones that can lead you down a…

Based on your morals, a debate with a computer to expose you to other points of view

Collective Debate from the MIT Media Lab gauges your moral compass with a survey and then tries to “debate” with you about gender bias using counterpoints from the opposite side of the spectrum. The goal isn’t to be right. Instead, it’s to try to understand the other side. At the end, you see how you compare to others.


The Case of the Missing Neuro Drug Trials

The case of the missing neurological drug trials remains shrouded in mystery. Nearly 48,000 people took part in these trials for new drugs for multiple sclerosis, stroke, Alzheimer disease, migraine, epilepsy, insomnia, and Parkinson disease…