Google Faces
searching for faces on Google Maps

2013

Collaboration with:
Christian Loclair
idea
An independent search agent hovering over the world to spot all the faces hidden on Earth.
The way we perceive our environment is a complex procedure. With the help of our vision we are able to recognize friends within a huge crowd, approximate the speed of an oncoming car or simply admire a painting. One of humanity's most characteristic traits is our desire to detect patterns. We use this ability to probe the detailed secrets of nature. However, we also tend to use it to enrich our imagination. Hence we recognize meaningful shapes in clouds or spot the Great Bear when looking up at the night sky.

Objective investigation and subjective imagination collide into one inseparable process. The tendency to detect meaning in vague visual stimuli is a psychological phenomenon called pareidolia, and it captures the core interest of this project.
We were driven by the idea of exploring how the psychological phenomenon of pareidolia could be generated by a machine. We wrote an algorithm that simulates this tendency by continuously searching for face-like shapes while iterating over the landscapes of the Earth. A major inspiration was the “Face on Mars” photographed by Viking 1 on July 25, 1976.
software
One of the key aspects of this project is the autonomy of the face-searching agent and the amount of data we are investigating. The source of our image data is, more or less voluntarily, provided by Google Maps. Our agent flips through one satellite image after another in order to feed the face detection algorithm with landscape samples. The corresponding iteration algorithm steps sequentially along the latitudes and longitudes of the globe, as sketched below. Once the agent has circumnavigated the world, it switches to the next zoom level and starts all over again.
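To illustrate the traversal, here is a minimal sketch of that iteration in OpenFrameworks-style C++. It is not the project's actual code: the step sizes, zoom range and callback are assumptions, chosen only to show how a complete pass at one zoom level leads to a finer, exponentially longer pass at the next.

```cpp
#include <functional>

// One sample position on the globe at a given zoom level.
struct Tile { double lat, lng; int zoom; };

// Walk the globe line by line; after each complete pass, zoom in one level.
// The step size shrinks with every zoom level, so the number of tiles
// (and thus the travel time) grows exponentially.
void traverseGlobe(int startZoom, int endZoom,
                   const std::function<void(const Tile&)>& process) {
    for (int zoom = startZoom; zoom <= endZoom; ++zoom) {
        double step = 180.0 / double(1 << zoom);   // assumed step size per zoom level
        for (double lat = 85.0; lat >= -85.0; lat -= step) {
            for (double lng = -180.0; lng < 180.0; lng += step) {
                // capture the satellite image here and feed the face detector
                process({lat, lng, zoom});
            }
        }
    }
}
```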

In order to run the face detection algorithm on top of different satellite images and store the geographical coordinates, we needed precise communication between our standalone application and a virtual browser surfing Google Maps. Therefore we decided to use ofxBerkelium, an OpenFrameworks wrapper for Berkelium. This library offers the possibility to capture browser images within a standalone application and to communicate with the page via JavaScript.
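As an illustration of that bridge, the snippet below builds the JavaScript that recenters the map for each step. The map-moving calls come from the public Google Maps JavaScript API; the executeJavascript() call in the final comment is an assumption about the ofxBerkelium wrapper, not a confirmed signature.

```cpp
#include <sstream>
#include <string>

// Build a JavaScript snippet that pans the embedded Google Map to a new
// position. 'map' is assumed to be the google.maps.Map instance created
// on the page loaded inside the virtual browser.
std::string buildMoveScript(double lat, double lng, int zoom) {
    std::ostringstream js;
    js << "map.setCenter(new google.maps.LatLng(" << lat << ", " << lng << "));"
       << "map.setZoom(" << zoom << ");";
    return js.str();
}

// In the update loop (hypothetical wrapper call on the Berkelium window):
// browser.executeJavascript(buildMoveScript(tile.lat, tile.lng, tile.zoom));
```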



To ensure that the computer vision runs under optimal conditions and detects as many quality faces as possible, we used the face-tracking library written by Jason Saragih, which gives quite stable results. Furthermore, ofxFaceTracker enabled us to easily access the corresponding functionality in OpenFrameworks.
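Below is a condensed sketch of how a captured browser frame could be handed to ofxFaceTracker. It follows the addon's public interface (setup, update, getFound), but the surrounding app structure and logging are assumptions rather than the project's actual code.

```cpp
#include "ofMain.h"
#include "ofxCv.h"
#include "ofxFaceTracker.h"

class FaceSearchApp : public ofBaseApp {
public:
    ofxFaceTracker tracker;
    ofImage frame;   // filled each iteration with the latest browser screenshot

    void setup() override {
        tracker.setup();                          // load the face model
    }

    void update() override {
        if (frame.isAllocated()) {
            tracker.update(ofxCv::toCv(frame));   // run detection on the current tile
            if (tracker.getFound()) {
                // here the current lat/lng/zoom would be stored with the match
                ofLogNotice() << "face-like pattern at " << tracker.getPosition();
            }
        }
    }
};
```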
results
Our face tracker has already circumnavigated the world a couple of times and astonished us with quite versatile results. As it continues to travel the world over the upcoming months, it keeps zooming further into the Earth. This decreases the step size for each iteration and therefore increases the number of images and the travel time exponentially. Some of the detected images aren't usable at all, as we cannot recognize any face-like patterns in them. Other satellite images, on the other hand, inspired our imagination in a tremendous, yet funny way. However the search goes on, as our diligent robot continues its investigation. Below we have collected several images it has already found. Click on an image to see it directly on Google Maps.