An Artificial Intelligence Learns How to Navigate Cities Using Only Google Street View
It's not easy to navigate without a map. Even if you've lived in a city your entire life, there's occasionally going to be some restaurant that forces you to pull up a map or GPS to remember how to get there.
But among the many ways that artificial intelligence is gradually surpassing humans, finding your way around a city is quickly becoming one of them. A team of researchers at Google's DeepMind - the creators of powerful AI programs like AlphaGo - has been teaching an AI to navigate cities without any maps, relying only on images from Google Street View to figure out where it is.
Of course, since an AI is just a virtual program until the day when it gets a slick robot body (or at least something with wheels), it had to navigate virtual versions of cities like New York, Paris, and London. But it's getting quite good at it.
"Navigating through real-world environments is a basic capability of intelligent agents. In 'Learning to Navigate in Cities Without a Map', we present a deep RL architecture that captures locale-specific features while enabling transfer to multiple cities: https://t.co/I7rRY1Yxe8"
— DeepMind (@DeepMindAI), 3 April 2018
In an informational video, DeepMind shows the AI being placed at a landmark or location in one of these virtual cities, compiled from Google Maps data, and then given a new destination. Without looking at a map, and with no prior knowledge of the city's layout, the AI learns to rely entirely on visual cues from Street View images to determine exactly where in the neighborhood it is.
Before long, the AI learns to get from point A to point B using only this knowledge. In essence, it can put together its own map of a city without ever seeing a real one. DeepMind's website phrases it this way:

"Long-range navigation is a complex cognitive task that relies on developing an internal representation of space, grounded by recognisable landmarks and robust visual processing, that can simultaneously support continuous self-localisation ('I am here') and a representation of the goal ('I am going there').
Building upon recent research that applies deep reinforcement learning to maze navigation problems, we present an end-to-end deep reinforcement learning approach that can be applied on a city scale."

The AI was tested in several New York neighborhoods, including Harlem, Midtown, Greenwich Village, and Lower Manhattan, as well as areas of other cities, such as central London and the Paris Rive Gauche.
Once it's ready for commercial use, this technology could help teach self-driving cars to move around without GPS data. Among the many concerns about autonomous vehicles, one difficult problem is how a car navigates once its cell service is cut off; tools like this could help a car's AI quickly adapt to city streets even when GPS is down.
Soon, our cars might know our hometowns better than we do. Whether this is good or bad depends mostly on your pride, we suppose.