Google Computers ‘Dream’ Up Imagery

[Image: googledream1]

Google has published pictures produced by the closest thing to a computer dreaming.

The pictures come from Google’s attempts to refine image recognition. That remains an area where computers lag far behind human beings, despite their advantages in speed, accuracy and endurance.

The problem for computers is that their traditional approach of breaking a task down into the smallest possible components and crunching the numbers sequentially can’t compete with the human ability to recognize patterns intuitively, almost instantly, while weighing multiple possibilities at once. That’s why we can spot a friend in the street even if they have a new hairstyle or are wearing clothes we’ve never seen before.

Google, like several other organizations, has been experimenting with artificial neural networks. These try to simulate the way the brain uses a grid of neurons to create huge numbers of potential pathways and thus run through multiple possibilities at once.

With images, it appears the brain solves the mystery by running through a series of steps. Google says one possibility is that the brain starts out with edges and corners to get an overall shape (which could perhaps be called jigsaw logic), then looks for familiar shapes and patterns within the picture to get more context, and finally drills down to the fine detail.
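A similar staged structure shows up in artificial networks: early layers respond to simple features, deeper layers to more complex ones. The sketch below is only an illustration of how those stages can be inspected; the model (torchvision’s VGG16), the layer indices and the random input are all stand-ins, since the article doesn’t describe Google’s actual setup.

```python
# A rough illustration, not Google's code: inspecting the "stages" of a
# pretrained CNN (torchvision's VGG16 is an assumption; the article doesn't
# name the model). Early convolutional layers tend to respond to edges and
# corners, middle layers to textures and simple shapes, deep layers to
# whole object parts.
import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

img = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed photo

with torch.no_grad():
    x = img
    for i, layer in enumerate(model.features):
        x = layer(x)
        if i in (3, 15, 29):       # a shallow, a middle and a deep layer
            print(f"layer {i:2d} ({type(layer).__name__}): {tuple(x.shape)}")
```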

Google has now experimented with reversing the process to look for flaws in the way its artificial neural network detects image components. It did so by feeding in an image and asking the network to highlight possible examples of a particular component, then feeding the resulting image back in to get even more exaggerated highlighting, and so on in a loop.
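The article doesn’t show Google’s code, but the loop it describes amounts to repeatedly nudging the image so that a chosen layer’s response gets stronger. Here is a hedged sketch in PyTorch; the model, layer index and step size are all arbitrary assumptions:

```python
# Hypothetical sketch of the feedback loop: strengthen whatever a chosen layer
# detects in the image, feed the modified image back in, and repeat, so those
# patterns get progressively more exaggerated. PyTorch and VGG16 are stand-ins;
# the article doesn't specify Google's actual setup.
import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
AMPLIFY_LAYER = 20                       # arbitrary choice of layer to amplify

def dream_step(img, step_size=0.02):
    img = img.clone().detach().requires_grad_(True)
    x = img
    for i, layer in enumerate(model.features):
        x = layer(x)
        if i == AMPLIFY_LAYER:
            break
    response = x.norm()                  # how strongly the chosen layer fires
    response.backward()
    # Nudge the pixels in the direction that increases that response.
    return (img + step_size * img.grad / (img.grad.abs().mean() + 1e-8)).detach()

# `img` would normally be a preprocessed photo of shape (1, 3, 224, 224);
# random values stand in here so the sketch runs on its own.
img = torch.rand(1, 3, 224, 224)
for _ in range(30):                      # the loop: feed each result back in
    img = dream_step(img)
```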

It turned out that, with the right programming, the network could “spot” almost any pattern in an otherwise unrelated image, much as people find familiar shapes in clouds:

[Image: googledream2]

In some cases the process highlighted problems with the set-up. For example, asked to spot patterns of dumbbells, the network did so, but every resulting “dumbbell” shape also included a hand and bicep holding it. That turned out to be because the set of images Google used to teach the network what a “dumbbell” looks like was dominated by pictures of people using them in workouts.

The team also tried having the network create images starting from a picture made up entirely of random noise, similar to an analog TV picture with no station tuned in. The resulting images [pictured top] were effectively created solely by the network interpreting the randomness, shaping it to match the subjects that were “on its mind”, similar to one theory about how dreaming works.
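In terms of the sketch above, that amounts to swapping the photo for pure noise and running the loop longer, so any structure in the result has to come from the network’s own learned expectations (again just a hedged illustration, reusing the hypothetical `dream_step` from the earlier sketch):

```python
# Start from static rather than a photo: any structure that emerges comes
# entirely from what the network "expects" to see, not from the input.
noise = torch.rand(1, 3, 224, 224)   # analog-TV-style random noise
dreamed = noise
for _ in range(200):                 # more iterations are needed from scratch
    dreamed = dream_step(dreamed)
```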

