Here is Google’s vision for the future of computing: As you drive home from work, you tell your car, “Ok, Google,” triggering the company’s Assistant. You order food; the digital helper handles the transaction and makes sure it’s ready when you arrive. Right now, Amazon.com Inc. and its Alexa digital assistant are closer to realizing that goal, having cut a deal this year with Ford Motor Co. to let drivers search, shop and control other devices by voice from their vehicles.
That’s just one of the ways Amazon is outpacing Google in the race to weave a digital assistant into consumers’ lives. What’s more, Amazon has a leading e-commerce business well suited to this emerging world, with a massive delivery network to speed orders to shoppers — an area where Google has struggled. Amazon’s Echo connected speakers, launched in 2014, have given Alexa an early lead by reaching millions of users at home, while Google’s rival Home device only came out late last year. “Amazon kind of fell into this lead in 2014 because it wanted to sell more things to people in new ways,” said Brian Roemmele, founder of ReadMultiplex.com, a website about voice-based commerce and computing. “Google is trying to evolve its online search experience into devices that have a voice. That philosophy limits its ability to really go after Amazon.”
Central to Alexa’s appeal: Amazon has thousands of voice-based apps up and running, far outnumbering Google’s tally. The Alphabet Inc. unit used its I/O developer conference this week to try to narrow Amazon’s lead by wooing skilled programmers capable of building tools and services that can make the Google Assistant more useful.
The company stressed its experience with mobile software and the millions of things it already understands on the web and in the real world. Vehicle and payment functions were rolled out to expand the Assistant beyond the simpler ways people use it now, such as setting alarms and playing music.
“A lot of what you need to get done is transactional,” said Gummi Hafsteinsson, product director for the Assistant. “We take care of all the nitty-gritty hard parts.”
Technology giants think voice-based computing could be the next big platform, after mobile. The company that wins will have the most users talking to devices and the most developers creating new experiences. Google’s Assistant has only been active for six months. That’s a major disadvantage when it comes to creating that virtuous circle of users and developers.
It tried to get that circle spinning faster at I/O. About 7,000 conference attendees were offered a free Google Home speaker and $700 worth of credits for its cloud-computing service. The company is hoping these developers will use both to build and test new voice-based apps (known as Actions on Google) for the company’s Assistant.
It needs to fire up these developers, many of whom have already been building for Amazon’s Alexa system. By February, there were 10,000 Alexa Skills — the equivalent of an app for Amazon’s voice-based system — up from 1,000 in June 2016. Bloomberg News counted fewer than 300 Actions for Google’s Assistant built by outside developers on Friday afternoon. As of last June, Google’s voice-based technology wasn’t yet publicly available.
Neither company is anywhere close to the dream of a naturally conversant machine. But Google is leaning on its expertise in artificial intelligence fields such as voice recognition and automated language understanding to pull ahead if or when this form of human-computer interaction takes off. It also hopes its long experience collecting and organizing information from the web will provide a useful fallback when the Assistant lacks a specific Action to address user questions.
Take cooking recipes, an early use for hands-free speakers. By November, Alexa could talk budding home chefs through more than 60,000 recipes. That came from a Skills integration with the cooking app Allrecipes. The latest Echo Show device, with a screen, is particularly well suited to voice-based kitchen use because it can show people recipes, as well as tell them about ingredients and cooking steps.
Bloomberg News asked Google’s Assistant to talk to Allrecipes on Friday afternoon, and the digital helper sent web links rather than an integrated voice-based Action. However, the Assistant has access to 45,000 recipe websites that have already been marked up with special code that lets Google’s software read out ingredients and other related information, Hafsteinsson said.
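The “special code” Hafsteinsson describes is structured-data markup of the kind schema.org defines for recipes. A publisher’s page might embed a JSON-LD block along these lines — a minimal sketch in which the recipe name, times and ingredients are invented examples:

```html
<!-- Minimal schema.org Recipe markup embedded in a page's HTML.
     All recipe details below are illustrative, not from a real site. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Weeknight Tomato Soup",
  "prepTime": "PT10M",
  "cookTime": "PT25M",
  "recipeIngredient": [
    "2 tbsp olive oil",
    "1 onion, chopped",
    "800 g canned tomatoes"
  ],
  "recipeInstructions": [
    {"@type": "HowToStep", "text": "Soften the onion in the oil."},
    {"@type": "HowToStep", "text": "Add the tomatoes and simmer for 25 minutes."}
  ]
}
</script>
```

Because the fields are machine-readable, software can read out ingredients and walk a cook through the steps even when no dedicated voice app exists for that site.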
“Google has better understanding. It has been working on AI much longer,” said Patricia Carando, a mobile developer attending the I/O conference. She’s been working on an Alexa Skill, but she was learning how to create an Action for Google’s Assistant during coding classes at the event on Thursday. Eventually, consumers will pick one digital assistant “and stick with it,” pushing developers to mostly build on that platform, she added.
A deep concern for Google is that more people will stick with Amazon, or with a future device from another competitor such as Apple Inc.
That’s behind Google’s rush to integrate its Assistant with any and all connected devices. At I/O, Google executives unveiled Home Graph, a system for connecting and controlling smart home devices by speaking to the Assistant. There are about 70 outside companies working with Google to sync things like dishwashers, fridges and light bulbs to the voice-based platform, including GE Appliances, Osram Lighting and Leviton, the internet giant said.
Even so, Amazon is making the most of its earlier start. The day before I/O began, a new Amazon program emerged for outside developers working with the company’s Alexa digital assistant, which controls its Echo devices and a rising number of other gadgets. If users talk a lot to a developer’s Alexa Skill, then Amazon will send a reward of cold, hard cash, according to a note posted on Amazon’s Alexa developer website.
On Amazon’s website, there are hundreds of home devices already for sale that work with Alexa, made by roughly 30 companies such as Samsung Electronics Co., Philips and Wink.
Google is taking pains to show how easy it is to use its tools. During an I/O demo in front of a packed crowd of developers on Thursday, a Google engineer built a simple Action for the Assistant that turned on a set of virtual light bulbs, then turned them green, by voice. The company is working on “what it means to be a washing machine,” the engineer said, to giggles from some in the audience.
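In Google’s smart-home model, the Assistant translates a spoken command like “turn the lights green” into a device intent delivered to the developer’s service. The request arrives roughly in this shape — a simplified sketch based on the platform’s EXECUTE intent, with the device id and exact field values invented for illustration:

```json
{
  "inputs": [{
    "intent": "action.devices.EXECUTE",
    "payload": {
      "commands": [{
        "devices": [{"id": "lamp-123"}],
        "execution": [
          {"command": "action.devices.commands.OnOff",
           "params": {"on": true}},
          {"command": "action.devices.commands.ColorAbsolute",
           "params": {"color": {"name": "green", "spectrumRGB": 65280}}}
        ]
      }]
    }
  }]
}
```

The developer’s service acknowledges the command and reports the devices’ new state back, which is how the demo’s virtual bulbs switched on and changed color.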
Google is also working on what it means to be an assistant. Organizing web information and supporting apps is familiar terrain. Discerning what consumers want to hear from their voice-based devices is new territory. At another panel during the week, Google engineers showed developers a list of criteria the company uses to decide which Actions and other data to surface within its new Assistant world. Those included questions about whether end users had to repeat requests, learned something or laughed out loud (“LOL”). The company rarely exposes such specific guidelines. It’s famously secretive about these types of criteria when it comes to search results, to avoid website developers gaming the system and ruining the experience. That openness shows how eager it is to lure a new army of outside voice-based developers.
“Please, please think about those things,” Valerie Nygaard, a Google product manager, told the audience. “These are the questions we ask ourselves.”