A colorblind view of the web

If you don’t use a colorblind-safe color palette in your maps and charts, a significant percentage of people will get nothing out of your work. For The Verge, Andy Baio, who is colorblind, discusses the experience across the web:

Because red and green are complementary colors opposite one another on the color wheel, they’ve become the default colors for every designer who wants to represent opposites: true and false, high and low, stop and go.

Inconveniently, these are also the two colors most likely to be mixed up by people with color vision deficiencies.

I wish every designer in the world understood this and would switch to, say, red and blue for opposing colors. But I know that won’t happen: the cultural meaning is too ingrained.

The Verge uses a slider to show what people with typical color vision see and then what Baio sees. I'm usually not into the slider mechanism, which tends to show a before-and-after view meant to highlight contrast. In this case, the views are so different that the contrast works.
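As a minimal sketch of the point about alternatives to red and green, here is what swapping in a colorblind-friendly pairing might look like in matplotlib. The Okabe-Ito palette choice and the example data are my assumptions, not something from Baio's piece:

```python
# A minimal sketch (not from the article): encoding opposites with a
# colorblind-friendly pair from the Okabe-Ito palette instead of red/green.
import matplotlib.pyplot as plt

# Hypothetical true/false counts, for illustration only
categories = ["true", "false"]
values = [42, 58]

# Okabe-Ito blue and vermillion stay distinguishable under the most
# common color vision deficiencies, unlike a pure red/green pairing.
colors = ["#0072B2", "#D55E00"]

fig, ax = plt.subplots()
ax.bar(categories, values, color=colors)
ax.set_ylabel("count")
ax.set_title("Opposites encoded without relying on red vs. green")
plt.show()
```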


AI-based image generation ethics

AI-based image generation is having a moment. Type some text and you can get a piece of art that resembles the style of your favorite artist. However, there's an ethical dilemma with the source material. Andy Baio talked to Hollie Mengert, whose artwork was used to create a model for Stable Diffusion:

“For me, personally, it feels like someone’s taking work that I’ve done, you know, things that I’ve learned — I’ve been a working artist since I graduated art school in 2011 — and is using it to create art that I didn’t consent to and didn’t give permission for,” she said. “I think the biggest thing for me is just that my name is attached to it. Because it’s one thing to be like, this is a stylized image creator. Then if people make something weird with it, something that doesn’t look like me, then I have some distance from it. But to have my name on it is ultimately very uncomfortable and invasive for me.”

AI-generated charts are only tangentially a thing so far. We humans still have a leg up in the context and meaning part of understanding data.


Images behind the generated images from Stable Diffusion

People have been having fun with the text-to-image generators lately. Enter a description, and the AI churns out believable and sometimes detailed images that match the input. These systems work because the models were trained on a lot of data, in the form of images. Andy Baio and Simon Willison made a tool to browse a subset of the data behind the recently released Stable Diffusion.
