Here’s how a neural network works

Daniel Smilkov and Shan Carter at Google put together this interactive learner for how a neural network works. In case you’re unfamiliar with the method:

It’s a technique for building a computer program that learns from data. It is based very loosely on how we think the human brain works. First, a collection of software “neurons” are created and connected together, allowing them to send messages to each other. Next, the network is asked to solve a problem, which it attempts to do over and over, each time strengthening the connections that lead to success and diminishing those that lead to failure.
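
The quoted loop (predict, compare, then strengthen or weaken connections) can be sketched in a few lines. This is a minimal, hedged illustration of the idea in plain Python, a single perceptron-style neuron learning the OR function, not the playground's actual code:

```python
# Minimal "strengthen success, diminish failure" loop: one software
# neuron with two input connections learns the OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # connection weights
b = 0.0          # bias
lr = 0.1         # how much each mistake adjusts the connections

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

# Ask the network to solve the problem over and over; each miss
# nudges the responsible connections up or down.
for _ in range(20):
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # → [0, 1, 1, 1], the learned OR
```

The playground's networks have many neurons and smooth activations, but the same train-adjust cycle is what you watch happening as the decision boundary shifts.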

I took one course on neural networks in college and poked at parameters like these for hours on various homework assignments and projects. I was basically a monkey pushing buttons to see what images I could produce. I wish I'd had something like this to mess around with, so I could have actually seen the process.

Playing with fonts using neural networks

Erik Bernhardsson downloaded 50,000 fonts and fed them to a neural network to see what sort of letters the model might come up with.

These are all characters drawn from the test set, so the network hasn’t seen any of them during training. All we’re telling the network is (a) what font it is (b) what character it is. The model has seen other characters of the same font during training, so what it does is to infer from those training examples to the unseen test examples.
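
The conditioning described in the quote (font id plus character id in, glyph out) can be sketched with toy shapes. The embeddings and decoder below are random stand-ins, not Bernhardsson's trained model; the sketch only shows how the two ids become a bitmap:

```python
import numpy as np

np.random.seed(0)

# The model's only inputs are a font id and a character id. Each gets
# an embedding vector (random stand-ins here; the real ones are
# learned), and a linear "decoder" maps the concatenated pair to a
# tiny 4x4 glyph bitmap.
n_fonts, n_chars, emb = 3, 26, 8

font_emb = np.random.randn(n_fonts, emb)
char_emb = np.random.randn(n_chars, emb)
decoder = np.random.randn(2 * emb, 16)   # -> 4x4 = 16 pixels

def render(font_id, char_id):
    z = np.concatenate([font_emb[font_id], char_emb[char_id]])
    pixels = 1 / (1 + np.exp(-z @ decoder))   # sigmoid -> intensities in (0, 1)
    return pixels.reshape(4, 4)

glyph = render(font_id=1, char_id=0)   # "the letter a, in font 1"
print(glyph.shape)
```

Because a character's embedding is shared across fonts, the model can combine "font 1" with a character it never saw in that font, which is exactly the test-set inference the quote describes; varying the font embedding smoothly is what produces the spectrum of generated fonts.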

I especially like the part where you can see a spectrum of generated fonts through varying parameters.

Fast image classification in real time

NeuralTalk2 uses neural networks to caption images quickly. To demonstrate, the video below shows a webcam feed that continuously updates with new image captions based on what the computer sees. It's not perfect, of course, but the performance is impressive.

Neural network for selfie analysis

To introduce convolutional neural networks, Andrej Karpathy fed millions of selfies to one, left the computer to its own devices, and tried to find out what makes a good selfie.

Okay, so we collected 2 million selfies, decided which ones are probably good or bad based on the number of likes they received (controlling for the number of followers), fed all of it to Caffe and trained a ConvNet. The ConvNet "looked" at every one of the 2 million selfies several tens of times, and tuned its filters in a way that best allows it to separate good selfies from bad ones. We can't very easily inspect exactly what it found (it's all jumbled up in 140 million numbers that together define the filters). However, we can set it loose on selfies that it has never seen before and try to understand what it's doing by looking at which images it likes and which ones it does not.
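
The "filters" Karpathy mentions are small grids of weights slid across the image, and training is what tunes their 140 million values. As a hedged sketch of what a single filter does (this one is hand-set to respond to vertical edges, not anything the selfie ConvNet actually learned):

```python
import numpy as np

# A tiny 5x5 "image": dark on the left, bright on the right.
image = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)

# One 3x3 filter: negative weights on the left column, positive on
# the right, so it fires where intensity jumps from dark to bright.
kernel = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

def convolve2d(img, k):
    """Slide the filter over every position and record its response."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

response = convolve2d(image, kernel)
print(response)  # strongest where the dark-to-light edge sits
```

A ConvNet stacks many such filters in layers and, rather than being hand-set, their weights are adjusted during training until the responses help separate one class (good selfies) from the other (bad ones).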

Key tips: use a filter and a border, crop off your forehead, and, most importantly, be a woman. For men, wider shots appear to work better.

Naturally, there are cause-and-effect questions to ask here. In the training setup, for example, a selfie with a lot of likes qualifies as a good one. It might be that people with more followers tend toward a certain type of photo. Maybe they're following a trend. Or maybe women take more selfies than men, which would explain why far more women appear in the top results.

In any case, that sort of misses the point. The point here is that neural networks can be fun and can output some interesting stuff. Be sure to scroll to the end of the article for resources and tools that you can play with.

Automated Super Mario World gameplay through machine learning

Seth Bling made a bot, MarI/O, that automatically learns how to play Super Mario World. It's based on 2002 research by Kenneth O. Stanley and Risto Miikkulainen that uses neural networks that evolve with a genetic algorithm. MarI/O starts out really dumb, just standing in place, but after enough simulations it gets smart enough to navigate the world.
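
NEAT, the algorithm behind MarI/O, also evolves the network's topology, which is too much for a short sketch. As a heavily simplified stand-in, here is a genetic algorithm evolving only the weights of a tiny fixed-topology network on XOR: mutate, evaluate, keep the fittest, repeat.

```python
import math
import random

random.seed(42)

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    """Fixed 2-2-1 tanh network; w holds its 9 weights and biases."""
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    # Higher is better: negative squared error over the four cases.
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def mutate(w):
    # Offspring are noisy copies of a surviving parent.
    return [wi + random.gauss(0, 0.5) for wi in w]

population = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
history = []
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    history.append(fitness(population[0]))
    elite = population[:10]                                  # survivors
    population = elite + [mutate(random.choice(elite)) for _ in range(40)]

population.sort(key=fitness, reverse=True)
best = population[0]
print(round(fitness(best), 3))
```

Because the elite survive unchanged each generation, the best fitness can only improve over time, which is why MarI/O goes from standing in place to clearing levels as the simulations pile up.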

The code is available here, and the paper from Stanley and Miikkulainen is here.

See also: the genetic algorithm walkers.

Translating images to words

With Google's image search, the results kind of exist in isolation. There isn't a ton of context until you click through to see how an image sits among words. So researchers at Google are borrowing from their approach to language translation to automatically create captions for images.

Now Oriol Vinyals and pals at Google are using a similar approach to translate images into words. Their technique is to use a neural network to study a dataset of 100,000 images and their captions and so learn how to classify the content of images.

But instead of producing a set of words that describe the image, their algorithm produces a vector that represents the relationship between the words. This vector can then be plugged into Google’s existing translation algorithm to produce a caption in English, or indeed in any other language. In effect, Google’s machine learning approach has learnt to “translate” images into words.
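
As a toy sketch of that hand-off (the vectors and vocabulary below are invented, and the real system learns a full language model rather than a lookup), the image-side vector can be matched against word embeddings to retrieve a crude caption:

```python
import numpy as np

# Invented word embeddings standing in for the word-relationship
# vectors the quote describes. The real ones are learned.
word_vectors = {
    "dog":   np.array([0.9, 0.1, 0.0]),
    "beach": np.array([0.1, 0.9, 0.1]),
    "car":   np.array([0.0, 0.1, 0.9]),
}

def caption(image_vector, k=2):
    """Rank words by cosine similarity to the image's vector and keep
    the top k as a crude bag-of-words caption."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(word_vectors,
                    key=lambda w: cos(image_vector, word_vectors[w]),
                    reverse=True)
    return ranked[:k]

# Pretend the image network encoded a photo of a dog on a beach as:
print(caption(np.array([0.8, 0.6, 0.05])))  # → ['dog', 'beach']
```

The key point the quote makes survives even in this toy: once the image lives in the same vector space as words, producing a caption becomes the translation machinery Google already has.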
