Google just changed the way you should think about AI

Google’s ability to beat a pro player at the game of Go was big news yesterday, but combined with the news that it purchased chips from Movidius to advance mobile artificial intelligence, Google is challenging a fundamental assumption many people hold about AI. Talk to most app developers and technologists, and they will tell you they are designing intelligent services to run in the cloud, because that’s where the compute power is.

But Google’s two announcements pave the way for truly intelligent mobile devices that can be untethered from the Internet, making them faster, more responsive and useful even when they aren’t online. First, it’s worth understanding why teaching a computer to play Go even matters. Google, Facebook and Microsoft are all attempting it, not just because Go is a challenging game, but because it represents a new type of problem for computers to solve.

Unlike chess, where a computer can get a long way by searching through possible moves and scoring the resulting positions, Go offers far too many possibilities for that brute-force approach to work: a chess player faces roughly 35 legal moves per turn, while a Go player faces around 250, and a Go position is much harder to score mid-game. So Go requires a computer to have a sense of which moves might come next, and to evaluate them. That’s something new for the type of artificial intelligence models researchers are trying to build. Most current deep learning models only teach computers to recognize things.

But with its purchase of DeepMind, Google bought into research aimed at teaching computers to evaluate their options and form policies around them. If that sounds terribly esoteric, it is. But it’s the difference between a computer that can learn to recognize an object or a word, and a computer that can recognize an object and then learn how to react to it.
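To make that distinction concrete, here is a loose sketch in plain Python of recognition versus a policy. None of this is DeepMind’s code; the labels, actions and scores are invented purely for illustration.

```python
# Purely illustrative sketch: recognition vs. a policy.
# The labels, actions and scores below are made up; this is not DeepMind's code.

def recognize(image):
    """Recognition: map an input to a label. This is what most deep learning
    systems deployed today are trained to do."""
    return "dog"  # stand-in for a trained classifier's output


def evaluate(state, action):
    """Stand-in for a learned value estimate: how good do things look
    after taking this action from this state?"""
    scores = {"block": 0.7, "extend": 0.4, "pass": 0.1}
    return scores[action]


def policy(state, possible_actions):
    """A policy doesn't just label the situation; it decides what to do next
    by weighing the available options."""
    return max(possible_actions, key=lambda action: evaluate(state, action))


print(recognize("photo.png"))                              # recognition -> "dog"
print(policy("board_state", ["block", "extend", "pass"]))  # decision -> "block"
```

The first function only names what it sees; the second weighs what to do about it. Teaching the second kind of behavior is the harder, newer problem.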

So teaching a computer to play Go is a foundational technology need wrapped up in a game. What about those chips? The silicon is a big deal because training an AI model, and even running one, is pretty power-intensive. Right now, plenty of people assume that AI requires the cloud. Today it does, but Google’s chip selection and the roadmap it’s planning with Movidius silicon indicate that the search giant is aggressively working to shift its machine learning efforts onto mobile devices. And it has a plan to do so.

Combine mobility with the ability to learn and intuit, and you have a computer that is far more interesting for all sorts of needs. Such devices will be useful in driverless cars, because they can make decisions faster and can do so even in a dead zone. They could be valuable on battlefields and in industrial settings where the internet is unreliable. They could even be useful in the smart home.

For example, a robotic vacuum that can merely recognize objects in its way is useful, but one that can recognize objects and then reason about what to do when it encounters them is far more interesting. Recognizing a person might trigger the vacuum to try to match that person against the people who live in the house. If no match is found, it could infer that the next best thing to do is notify the homeowner with an image from its camera feed.
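Sketched as hypothetical code, that recognize-then-decide loop might look something like the following. The resident list, the matching step and the notification call are all invented here for illustration; this doesn’t describe any vacuum maker’s actual software.

```python
# Hypothetical sketch of the vacuum scenario above. The resident roster,
# recognition and matching functions, and the notification step are invented.

KNOWN_RESIDENTS = {"alice", "bob"}


def recognize(frame):
    """Stand-in for an on-device vision model: what is in front of the vacuum?"""
    return "person"


def identify(frame):
    """Stand-in for on-device face matching: returns a resident's name or None."""
    return None  # pretend no match was found


def notify_homeowner(frame):
    print("Unrecognized person detected; sending a snapshot to the homeowner.")


def handle(frame):
    if recognize(frame) != "person":
        return                    # just an obstacle: steer around it and keep cleaning
    if identify(frame) in KNOWN_RESIDENTS:
        return                    # someone who lives here: nothing to report
    notify_homeowner(frame)       # inferred next-best action: alert with an image


handle("camera_frame.jpg")
```

The point is that each branch is a decision the device reasons its way to on the spot, not a label it sends to the cloud and waits on.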

The vacuum maker could program that interaction manually, but as the number of services available to connected devices proliferates, the ability of these products to figure out for themselves what they can do will be immensely helpful. That’s one of the reasons Viv is such an interesting startup. Anyhow, to do any of this, a device needs the abilities that learning to play Go provides, and it needs a way to learn without requiring a lot of power.

Yesterday Google laid its cards, or its stones, on the table and told the world that it plans on pursuing a vision of AI untethered from giant servers and an always-on connection. That’s pretty cool.
