People font

You know those graphics that use icons of people to represent units or counts of people? The Wee People font by Alberto Cairo and Scott Klein makes it easier to use such icons on the web. Just add the CSS file and you’re ready to go.


Alternatives to the rainbow color scale

Oh. It’s that time of year already. Time to hate on the rainbow color scale, which remains prevalent despite being less useful than the alternatives. Matt Hall provides (scientific!) reasons to favor scales that don’t span the full spectrum, along with some solutions.

We know what kind of colourmaps are good for interpretation: those that increase linearly and monotonically in brightness, with no jumps or stripes of luminance. I’ve linked to lots of places where you can read about these — see the end of the post. You already know one perceptual colourmap: the humble Greyscale. But there are lots of others, so let’s start with one of them.
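
If you want to see the difference for yourself, here’s a minimal Python sketch, my own illustration rather than anything from Hall’s post, that renders the same made-up data with matplotlib’s rainbow-style jet and the perceptually uniform viridis:

```python
# Compare a rainbow colormap with a perceptually uniform one on the same data.
# (Illustrative sketch; the data and colormap choices are just for demonstration.)
import numpy as np
import matplotlib.pyplot as plt

# Sample data: a smooth 2D field with no sharp features.
x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
z = np.exp(-(x**2 + y**2) / 4) * np.cos(2 * x)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, cmap in zip(axes, ["jet", "viridis"]):
    # "jet" is the classic rainbow scale; "viridis" increases monotonically
    # in lightness, so it doesn't introduce false stripes or bands.
    im = ax.imshow(z, cmap=cmap, origin="lower")
    ax.set_title(cmap)
    fig.colorbar(im, ax=ax)

plt.show()
```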


Importance of form and survey design to gain an accurate picture

Lena Groeger, writing for Source, shifts attention upstream from analysis to the design of forms in the data collection process.

Whether you’re filling out a form or building it yourself, you should be aware that decisions about how to design a form have all kinds of hidden consequences. How you ask a question, the order of questions, the wording and format of the questions, even whether a question is included at all—all affect the final result. Let’s take a look at how.

Census surveys, election ballots, and racial profiling. Oh my.


Choosing color palettes for choropleth maps

Choropleth maps, the ones where regions are filled with colors based on data, keep getting easier to make. However, choosing the colors, the number of color classes, and the breakpoints is often less straightforward, because the answer is always context-specific. Lisa Charlotte Rost, now at Datawrapper, provides a rundown of the decision process.

The explanation is in the context of the Datawrapper tool, but you can easily apply the logic to your own workflow.
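
For the breakpoints part specifically, here’s a rough Python sketch of two common classing schemes, quantile breaks and equal-interval breaks. It’s my own illustration of the general idea, not Datawrapper’s implementation, and the values are made up:

```python
# Compare equal-interval and quantile class breaks for a choropleth.
import numpy as np

values = np.array([2.1, 3.4, 3.9, 4.2, 5.0, 5.5, 6.1, 7.8, 9.3, 12.4])  # e.g. rates per region
n_classes = 4

# Equal-interval breaks: same width per class, sensitive to outliers.
equal_breaks = np.linspace(values.min(), values.max(), n_classes + 1)

# Quantile breaks: roughly the same number of regions per class, robust to skew.
quantile_breaks = np.quantile(values, np.linspace(0, 1, n_classes + 1))

print("equal interval:", np.round(equal_breaks, 2))
print("quantile:      ", np.round(quantile_breaks, 2))

# Assign each region to a class index, which you would then map to a
# sequential palette (for example, a 4-step ColorBrewer scheme).
classes = np.digitize(values, quantile_breaks[1:-1], right=True)
print("class per region:", classes)
```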


Project Lincoln from Adobe aims to reverse data visualization workflow

With data visualization, you usually start with the data and let it guide the geometry, colors, and so on; from there, you work on aesthetics, readability, and usability. The data informs the design. Project Lincoln is an experiment from Adobe that flips this: you draw shapes and illustrations first and then bind data to them.


My brain was confused. Something about this order of things doesn’t feel right. You go in with design first and then bring in the data, and then you edit again? Maybe this would be useful for quick prototypes or visual experiments? It’s hard to say how this would go in practice without actually trying it out, but my gut says no.


Infographic design sins in meme form

Visual editor Xaquín G.V. recently used the distracted boyfriend meme to represent our attraction to novel visualization methods when a simple and visually sound method is right there at our disposal.

Then he ran with it to illustrate his professional sins as an editor for a news desk.


Criticism vs. Creation

Filmmaker Kevin Smith talks about making things versus critiquing them. He’s talking about movies, but you can so easily plug in visualization. I just kept nodding yes. [via swissmiss]


Data exploration banned

Statistician John Tukey, who coined the term Exploratory Data Analysis, talked a lot about using visualization to find meaning in your data. You don’t always know what you’re looking for, so you explore it visually. Eytan Adar, who teaches information visualization at the University of Michigan, makes a good case for banning the phrase in his students’ project proposals.

For all the clever names he created for things (software, bit, cepstrum, quefrency), what’s up with EDA? The name is fundamentally problematic because it’s ambiguous. “Explore” can be both transitive (to seek something) and intransitive (to wander, seeking nothing in particular). Tukey’s book seems to emphasize the former — it’s full of unique graphical tools to find certain patterns in the data: distribution types, differences between distributions, outliers, and many other useful statistical patterns. The problem is that students think he meant the latter.

I see this sort of thing in my suggestion box too. Data exploration with visualization is good, but when someone describes their project as an exploration tool, it often means it lacks focus or direction. Instead, it ends up as generic graphs that don’t answer anything in particular and leave all interpretation to the reader.
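
To make the transitive sense concrete, here’s a small Python sketch of exploring with specific questions in mind, comparing two distributions and checking for outliers. It’s my own illustration, not anything from Tukey or Adar, and the data is simulated:

```python
# "Exploration with a target": ask specific questions of the data
# rather than wandering. Simulated data for illustration only.
import numpy as np

rng = np.random.default_rng(0)
group_a = rng.normal(10, 2, 500)
group_b = rng.normal(11, 2, 500)

# Question 1: do the two groups differ in center and spread?
for name, g in [("A", group_a), ("B", group_b)]:
    iqr = np.subtract(*np.percentile(g, [75, 25]))
    print(name, "median:", round(np.median(g), 2), "IQR:", round(iqr, 2))

# Question 2: are there outliers beyond 1.5 * IQR (the usual boxplot rule)?
q1, q3 = np.percentile(group_a, [25, 75])
iqr_a = q3 - q1
outliers = group_a[(group_a < q1 - 1.5 * iqr_a) | (group_a > q3 + 1.5 * iqr_a)]
print("outliers in A:", len(outliers))
```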


Use dual axes with care, if at all

Dual axes, where there are two value scales in a single chart, are almost never a good idea. As a reader, you should always question the source when you see a chart that uses such scales. Zan Armstrong explains with a recent example.

One of the best descriptions I’ve heard for data viz is that when the data is different, the viz should look different, and when the data is similar, the viz should look similar.

If you allow yourself to have two y-axes for the same metric, with both a different scale on each axis and a different base value, then you can make a lot of charts with the exact same data that look very different.

If there’s a direct transformation between the scales, say between metric and Imperial units, then okay, that’s fine. In almost all other cases, people use dual axes to overemphasize a relationship between two variables, and you should wonder why the maker did that.
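
Here’s a quick Python sketch, my own made-up data rather than Armstrong’s example, that plots the same two series against a secondary y-axis and only changes the secondary axis limits. The relationship looks different in each panel even though the data never changes:

```python
# How the limits chosen for a secondary y-axis change how related
# two series appear. Illustrative data only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
months = np.arange(12)
series_a = 50 + 2 * months + rng.normal(0, 1, 12)      # primary axis
series_b = 3 + 0.1 * months + rng.normal(0, 0.2, 12)   # secondary axis

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, (lo, hi) in zip(axes, [(0, 20), (2.5, 4.5)]):
    ax.plot(months, series_a, color="tab:blue")
    ax2 = ax.twinx()                    # second value scale on the right
    ax2.plot(months, series_b, color="tab:orange")
    ax2.set_ylim(lo, hi)                # only the secondary limits change
    ax.set_title(f"secondary axis: {lo} to {hi}")

plt.tight_layout()
plt.show()
```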


Data bias at every step

Lena Groeger, for ProPublica, describes how the designer shows up in the design, not just in the visualization but also in the collection, selection, and aggregation of data. Our perspective always comes into play.

The effects may be subtle, but if we pour so much of ourselves into the stories we tell, the data we gather, the visuals we design, the webpages we build, then we should take responsibility for them. And that means not just accepting the limits of our own perspective, but actively seeking out people who can bring in new ones.

It’s common to think of data and analysis as unbiased fact. Concrete. You can’t argue with numbers. However, that’s rarely the case. We analyze and visualize with preconceptions, and that drives many aspects of whatever comes next.

Analysis is a process driven by experience. Technically, this means learning new methods as you look at various data types and situations. Contextually, this means forming conclusions based on what you know about the subject matter. If there are knowledge gaps technically or contextually, you run into issues.
