AI and the American smile

Jenka Gurfinkel discusses the appearance of the American smile in AI-generated images and its implications for interpreting data:

Every American knows to say “cheese” when taking a photo, and, therefore, so does the AI when generating new images based on the pattern established by previous ones. But it wasn’t always like this. More than a century after the first photograph was captured, a reference to “cheesing” for photos first appeared in a local Texas newspaper in 1943. “Need To Put On A Smile?” the headline asked, “Here’s How: Say ‘Cheese.’” The article quoted former U.S. ambassador Joseph E. Davies who explained that this influencer photo hack would be “Guaranteed to make you look pleasant no matter what you’re thinking […] it’s an automatic smile.” Davies served as ambassador under Franklin D. Roosevelt to the U.S.S.R.

My natural face is generously non-smiley, so this resonated.


Bias in AI-generated images

Lensa is an app that lets you retouch photos, and it recently added a feature that uses Stable Diffusion to generate AI-assisted portraits. While fun for some, the feature reveals biases in the underlying dataset. Melissa Heikkilä, for MIT Technology Review, describes how the results skew toward sexualized images for some groups:

Lensa generates its avatars using Stable Diffusion, an open-source AI model that generates images based on text prompts. Stable Diffusion is built using LAION-5B, a massive open-source data set that has been compiled by scraping images off the internet.

And because the internet is overflowing with images of naked or barely dressed women, and pictures reflecting sexist, racist stereotypes, the data set is also skewed toward these kinds of images.

This leads to AI models that sexualize women regardless of whether they want to be depicted that way, Caliskan says—especially women with identities that have been historically disadvantaged.
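Lensa's exact pipeline isn't public, but the basic text-to-image call is easy to sketch with the Hugging Face diffusers library. The checkpoint name below is one commonly used Stable Diffusion release, an assumption rather than anything Lensa has confirmed.

```python
# Minimal text-to-image sketch with Stable Diffusion via Hugging Face diffusers.
# The checkpoint is an assumption; Lensa's own setup isn't public.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained Stable Diffusion checkpoint (trained on LAION images).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Generate an image from a text prompt.
prompt = "a portrait of a person in the style of a classical oil painting"
image = pipe(prompt).images[0]
image.save("portrait.png")
```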


Find a color palette based on words

PhotoChrome is a straightforward tool that lets you use search terms to find a color palette. Just enter a query, and it spits out a color scheme of hex values based on matching images.

It’s like Picular from a few years ago, but more focused, with hex values you can copy and paste.
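PhotoChrome doesn't publish how it picks its colors, but the general idea of boiling images down to a handful of hex values is straightforward to sketch. Here's a rough Python version using Pillow's quantization; the function name and color count are just for illustration, not PhotoChrome's actual method.

```python
# Rough sketch: reduce an image (say, one that matches a search term) to a few
# dominant colors as hex codes. Not PhotoChrome's actual method.
from PIL import Image

def dominant_hex_colors(path, n_colors=6):
    img = Image.open(path).convert("RGB")
    # Quantize to n_colors (median-cut by default), then read back the palette.
    quantized = img.quantize(colors=n_colors)
    palette = quantized.getpalette()[: n_colors * 3]
    return [
        "#{:02x}{:02x}{:02x}".format(*palette[i:i + 3])
        for i in range(0, len(palette), 3)
    ]

print(dominant_hex_colors("sunset.jpg"))  # prints six hex strings
```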


Neural network creates images from text

OpenAI trained a neural network that they call DALL·E with a dataset of text and image pairs. So now the neural network can take a text prompt, even an unlikely combination of descriptors and objects, and output matching images: a purse in the style of a Rubik’s cube, say, or a teapot imitating Pikachu.
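DALL·E itself wasn't publicly callable when it was announced, but OpenAI later exposed image generation through its API. Here's a minimal sketch with the current openai Python client; the model name and size are assumptions on my part, so check the API docs for what's actually offered.

```python
# Minimal sketch of text-to-image generation through OpenAI's Images API
# (added well after the original DALL-E announcement; model name and size
# below are assumptions -- check the current API docs).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",
    prompt="an armchair in the shape of an avocado",
    n=1,
    size="512x512",
)
print(response.data[0].url)  # URL of the generated image
```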


Explore generative models and latent space with a simple spreadsheet interface

Generative models can seem like a magic box where you plug in observed data, turn some dials, and see what the computer spits out. SpaceSheet is a simple spreadsheet interface for exploring and experimenting with latent space, giving a clearer view of the spaces in between. Even if you’re not into this research area, it’s fun to click and drag things around to see what happens.
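The core operation behind an interface like this is just arithmetic on latent vectors: interpolate between two points and decode each step back into an image. A toy sketch in Python, where `decode` is a hypothetical stand-in for whatever generator you have on hand:

```python
# Toy sketch of the idea behind SpaceSheet: treat latent vectors like
# spreadsheet cells and interpolate between them. `decode` is a hypothetical
# stand-in for a real generator (VAE, GAN, etc.).
import numpy as np

def interpolate(z_a, z_b, steps=8):
    """Linearly interpolate between two latent vectors."""
    return [(1 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, steps)]

latent_dim = 64
z_a = np.random.randn(latent_dim)  # latent code for "image A"
z_b = np.random.randn(latent_dim)  # latent code for "image B"

for z in interpolate(z_a, z_b):
    pass  # image = decode(z)  # hypothetical decoder call
```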


Triangulate a picture

Triangulate, a fun tool made by Michael Freeman, lets you upload a picture, randomly places points on it, and outputs something that looks pixelated, but with triangles instead of squares. Give it a try.
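For a sense of what's going on under the hood, here's a rough sketch of the same idea (not Freeman's actual code): scatter random points over a photo, Delaunay-triangulate them, and fill each triangle with the color at its centroid.

```python
# Rough sketch of image triangulation: random points + Delaunay triangulation,
# each triangle filled with the color sampled at its centroid.
import numpy as np
from PIL import Image, ImageDraw
from scipy.spatial import Delaunay

def triangulate_image(path, n_points=500, out="triangulated.png"):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    pixels = img.load()

    # Random interior points plus the corners so the mesh covers the whole image.
    pts = np.random.rand(n_points, 2) * [w - 1, h - 1]
    corners = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]])
    pts = np.vstack([pts, corners])

    tri = Delaunay(pts)
    canvas = Image.new("RGB", (w, h))
    draw = ImageDraw.Draw(canvas)
    for simplex in tri.simplices:
        triangle = [tuple(p) for p in pts[simplex]]
        cx, cy = pts[simplex].mean(axis=0)
        draw.polygon(triangle, fill=pixels[int(cx), int(cy)])
    canvas.save(out)

triangulate_image("photo.jpg")
```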


Algorithm to detect wildfires earlier

Traditional detection algorithms use infrared heat as the main signal of a wildfire. The Firelight Detection Algorithm (FILDA) uses visible light instead, potentially detecting a fire as much as a day earlier.

FILDA detects the visible light that a fire emits in high-resolution satellite imagery. Using the VIIRS instrument, which captures both infrared and visible-light information at night, it can detect about 90 percent more fire pixels than previous methods, and it picks up both smoldering and flaming fires. This lets researchers see when fires start, when they go dormant, and which weather events contribute to their spread.
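The real algorithm is more involved, but the core move, requiring a pixel to look anomalous in both the visible band and the infrared band, can be illustrated with a toy threshold check. The thresholds below are made up for the sketch; this is not the published FILDA method.

```python
# Toy illustration: flag candidate fire pixels where both the nighttime
# visible-light band and the infrared band are anomalously bright.
# Not the published FILDA algorithm; thresholds are invented for the sketch.
import numpy as np

def candidate_fire_pixels(visible, infrared, vis_thresh=5.0, ir_thresh=320.0):
    """Boolean mask of pixels bright in both bands.

    visible  -- nighttime visible-band radiance, 2D array
    infrared -- mid-infrared brightness temperature in kelvin, 2D array
    """
    return (visible > vis_thresh) & (infrared > ir_thresh)

# Tiny synthetic scene with one hot, bright spot at (4, 4).
vis = np.zeros((10, 10)); vis[4, 4] = 12.0
ir = np.full((10, 10), 280.0); ir[4, 4] = 340.0
print(np.argwhere(candidate_fire_pixels(vis, ir)))  # [[4 4]]
```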


Fast image classifications in real-time

NeuralTalk2 uses neural networks to caption images quickly. To demonstrate, the video below shows a webcam feed that continuously updates with new image captions based on what the computer sees. It's not perfect, of course, but the performance is impressive.
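The demo's loop is conceptually simple: grab a frame from the webcam, run it through a captioning model, overlay the text, repeat. A sketch with OpenCV, where `caption_image` is a hypothetical stand-in for NeuralTalk2 or any modern captioning model:

```python
# Sketch of a real-time captioning loop: webcam frames in, caption overlaid.
# `caption_image` is a hypothetical stand-in for an actual captioning model.
import cv2

def caption_image(frame):
    # Hypothetical: run the frame through your captioning model here.
    return "a caption for the current frame"

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    text = caption_image(frame)
    cv2.putText(frame, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                0.8, (0, 255, 0), 2)
    cv2.imshow("captions", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```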


Map of book subjects on Internet Archive


The Internet Archive makes millions of digitized books available in the form of scanned pages, and these books are categorized into thousands of subjects. Focusing on book images, Mario Klingemann mapped subjects based on tag similarity. Browse and discover new reading material.

This map offers an alternative way to browse the 2,619,833 images contained in the Internet Archive's book collection. It shows 5500 different subjects which have been algorithmically arranged by their thematic relationships. The size of each link resembles the amount of images that are available for that topic. Clicking on a link will open the flickr page containing all the pictures for that subject. Rolling over a link will highlight all the topics that have a direct link with the subject.

I recommend browsing towards the middle in the medical cluster for some weird, old-school healing techniques.
