If you build conversational interfaces and have been waiting for Google to open up Actions on Google, here is some good news: Google has announced that all developers can now use the platform to build conversational experiences for Google devices and services.
The official blog post laid out details for initial setup. Actions on Google currently only works on Google Home, the company's voice-enabled device and competitor to Amazon's Echo. Early next year, Actions that are now limited to Google Home are expected to work seamlessly on the Google Assistant that ships with the Google Pixel phone and the Google Allo application.
There are two kinds of Google Actions that will eventually be supported: Conversation Actions and Direct Actions. The current release supports only Conversation Actions: a one-on-one conversation with an agent, in which the agent delivers the information or performs the task the user requested. Direct Actions, which enable control of other hardware such as lights, are not yet available.
When comparing Google Home and Amazon Echo, many argue that the Echo has a lead, partly because of its first-mover advantage and the thousands of third-party Alexa Skills developers have already built. The key difference in approach is that, unlike Alexa Skills, which users have to enable on their Echo device, every approved Action is made available to all Google users with nothing to enable on their devices. This means invocation names must be unique, something the Google Actions review team enforces along with the other submission guidelines an Action has to pass before it is approved into the ecosystem.
Building out a conversational experience is not an easy task; there is a design element to crafting a simple, natural conversation that makes it easy for users to ask for what they want. Developers should work through the Design section on the Actions on Google site for guidance. From a development perspective, you can use either the Actions SDK or the higher-level tools Google is making available, which aim to take care of much of the heavy lifting for you. One such tool comes from API.AI, which Google recently acquired. API.AI provides a high level of abstraction: you define your voice intents, extract the parameters, and then fulfill each request by invoking your backend code, which can be hosted elsewhere.
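To make that concrete, here is a minimal sketch of what such a fulfillment webhook could look like, written in Python with Flask. It assumes the API.AI v1 webhook format (matched parameters arriving under `result.parameters`, and a `speech`/`displayText` reply); the `ingredient` parameter and the recipe response are hypothetical placeholders, not part of any official sample.

```python
# Minimal fulfillment webhook sketch for an API.AI agent.
# Assumes the API.AI v1 webhook format; the "ingredient" parameter
# and recipe reply are hypothetical, for illustration only.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json(force=True)

    # API.AI passes the matched intent's parameters under result.parameters.
    params = req.get("result", {}).get("parameters", {})
    ingredient = params.get("ingredient", "something tasty")

    # Your backend logic goes here, e.g. look up a recipe for the ingredient.
    reply = f"How about a simple {ingredient} stir-fry tonight?"

    # Respond with the text the Assistant should speak and display.
    return jsonify({
        "speech": reply,
        "displayText": reply,
        "source": "recipe-backend"
    })

if __name__ == "__main__":
    app.run(port=8080)
```

The point is that API.AI handles the speech recognition and intent matching, so your backend only deals with structured parameters and returns plain text to be spoken back.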
Here is a great tutorial on how to build a Conversation Action using API.AI. The Action behaves like a chef and suggests recipes based on the ingredients you have at home.
Google Actions is still in its infancy, but every developer knows the wide reach of a Google product or service, and that alone makes the ecosystem worth investing in. Being an early adopter and releasing Actions now may pay dividends for developers.