Fake faces created by AI and where this might be headed

It’s grown easier and easier to generate fake faces with AI. For The New York Times, Kashmir Hill and Jeremy White demonstrate the tech with a slick interactive. Quickly adjust age, eyes, mood, and gender. All fake.

It was only a few years ago when the idea seemed novel. One year later, there were guides (and warnings) for spotting fake faces. By 2019, there was a marketplace for fake faces (of course). Sometimes it’s scary to think about what the internet will be in five years.

In any case, check out the NYT piece. The smooth transitions between faces, one facial aspect at a time, are mesmerizing.

Botnet, a social network where it’s just you and a lot of bots

Botnet is a social media app where you’re the only human among a million bots trained on social media activity. Post pictures, status updates, or whatever else you want. Then let the likes and weird comments roll in.

You can even purchase troll bots, bots that tell dad jokes, and more bots.

Social media is on its way to being mostly bots anyway. Might as well jump-start the future. Artificial intelligence for the win.

AI-generated pies

Janelle Shane applied her know-how with artificial intelligence to generate new types of pies that the world has never seen:

People wonder about what it would be like if a super-intelligent AI decided to place all of humanity in a realistic simulation. I wonder what it would be like if the simulation were built by today’s AI instead – whose computing power is somewhere around the level of an earthworm’s.

Specifically, what would the pies be like?

Mmmm, pie with cassette tapes.

AI-generated voice used to fake phone call and steal money

Reporting for The Washington Post, Drew Harwell describes the case of the fake voice used for bad things:

Thieves used voice-mimicking software to imitate a company executive’s speech and dupe his subordinate into sending hundreds of thousands of dollars to a secret account, the company’s insurer said, in a remarkable case that some researchers are calling one of the world’s first publicly reported artificial-intelligence heists.

The managing director of a British energy company, believing his boss was on the phone, followed orders one Friday afternoon in March to wire more than $240,000 to an account in Hungary, said representatives from the French insurance giant Euler Hermes, which declined to name the company.

Publicly available software that makes it straightforward to impersonate others digitally: what could go wrong?

Unproven aggression detectors, more surveillance

In some public places, such as schools and hospitals, microphones installed with software listen for noise that sounds like aggression. The systems alert the authorities. It sounds useful, but in practice, the detection algorithms might not be ready yet. For ProPublica, Jack Gillum and Jeff Kao did some testing:

Yet ProPublica’s analysis, as well as the experiences of some U.S. schools and hospitals that have used Sound Intelligence’s aggression detector, suggest that it can be less than reliable. At the heart of the device is what the company calls a machine learning algorithm. Our research found that it tends to equate aggression with rough, strained noises in a relatively high pitch, like D’Anna’s coughing. A 1994 YouTube clip of abrasive-sounding comedian Gilbert Gottfried (“Is it hot in here or am I crazy?”) set off the detector, which analyzes sound but doesn’t take words or meaning into account. Although a Louroe spokesman said the detector doesn’t intrude on student privacy because it only captures sound patterns deemed aggressive, its microphones allow administrators to record, replay and store those snippets of conversation indefinitely.

Marvelous.

Machine boss

For The New York Times, Kevin Roose on the possibility of machines becoming your boss:

The goal of automation has always been efficiency, but in this new kind of workplace, A.I. sees humanity itself as the thing to be optimized. Amazon uses complex algorithms to track worker productivity in its fulfillment centers, and can automatically generate the paperwork to fire workers who don’t meet their targets, as The Verge uncovered this year. (Amazon has disputed that it fires workers without human input, saying that managers can intervene in the process.) IBM has used Watson, its A.I. platform, during employee reviews to predict future performance and claims it has a 96 percent accuracy rate.

Splendid.

Building a robot boyfriend

When it comes to robots and love, the concept typically deteriorates into subservient tools meant to satisfy male fantasies. Creative technologist Fei Lu aims for a more complex relationship with Gabriel2052:

Creating Gabriel2052 is obviously technically challenging, but it’s ultimately a process within my control. He will become something—someone—I can form a lifelong bond with. Through bringing Gabriel2052 to life, I am investigating and confronting the ways in which technology and society create both harmful and uplifting narratives; the ones we’ve become complicit in during our search for love and understanding from others, and the world at large.

So instead of a robot that is purely there to serve, Lu explores a robot that’s a bit closer to human and driven by her emotional needs (and an ex-boyfriend’s text messages) — because inevitably, our relationship with robots will impact our relationships with real people.

When data is not quite what it seems

FiveThirtyEight used a dataset on broadband as the basis for a couple of stories. The data appears to be flawed, which makes for a flawed analysis. From their post-mortem:

We should have been more careful in how we used the data to help guide where to report out our stories on inadequate internet, and we were reminded of an important lesson: that just because a data set comes from reputable institutions doesn’t necessarily mean it’s reliable.

Then, Andrew Gelman and Michael Maltz took a closer look at data collected by the Murder Accountability Project, which has its merits but also some holes:

if you’re automatically sifting through data, you have to be concerned with data quality, with the relation between the numbers in your computer and the underlying reality they are supposed to represent. In this case, we’re concerned, given that we did not trawl through the visualizations looking for mistakes; rather, we found a problem in the very first place we looked.

There’s also ChestXray14, a large dataset of chest x-rays used to train medical artificial intelligence systems. Radiologist Luke Oakden-Rayner took a closer look, and that dataset appears to have its issues as well:

In my opinion, this paper should have spent more time explaining the dataset. Particularly given the fact that many of the data users will be computer scientists without the clinical knowledge to discover any pitfalls. Instead, the paper describes text mining and computer vision tasks. There is one paragraph (in eight pages), and one table, about the accuracy of their labeling.

For data analysis to be meaningful, for it to actually work, you need that first part to be legit. The data. If the data collection process is shoddy, missing data outnumbers observations, or computer-generated estimates aren’t vetted by a person, then there’s a good chance anything you do afterwards produces questionable results.

Obviously this isn’t to say avoid data altogether. Every abstraction of real life comes with its pros and cons. Just don’t assume too much about a dataset before you examine it.
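A first pass at that kind of examination can be as simple as counting how much of a dataset is actually there before you analyze it. A minimal sketch, using a made-up broadband table (the column names and values here are hypothetical, not from the FiveThirtyEight data):

```python
# Minimal sketch: count missing values in a column before trusting an analysis.
# The data below is invented for illustration.
import csv
import io

sample = io.StringIO(
    "county,broadband_pct\n"
    "A,0.82\n"
    "B,\n"
    "C,0.05\n"
    "D,\n"
)

rows = list(csv.DictReader(sample))
missing = sum(1 for r in rows if not r["broadband_pct"])
print(f"{missing} of {len(rows)} rows missing broadband_pct")
```

If a check like this shows missing values rivaling actual observations, or suspiciously round estimates, that’s a cue to dig into how the data was collected before building any story on top of it.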

Google A.I. Experiments

In an effort to get more people interested in and learning about artificial intelligence, Google just launched A.I. Experiments to showcase the technology in fun ways.

With all the exciting A.I. stuff happening, there are lots of people eager to start tinkering with machine learning technology. A.I. Experiments is a showcase for simple experiments that let anyone play with this technology in hands-on ways, through pictures, drawings, language, music, and more.

You can also download the code for each project and have a go yourself.
