Have you built an Action for the Google Assistant and wondered how many people are using it? Or how many of your users are returning users? In this blog post, we will dive into 5 improvements that the Actions on Google Console team has made to give you more insight into how your Action is being used.
We've updated three areas of the Actions Console for readability: Active Users Chart, Date Range Selection, and Filter Options. With these new updates, you can now better customize the data to analyze the usage of your Actions.
The labels at the top of the Active Users chart now read Daily, Weekly and Monthly, instead of the previous 1 Day, 7 Days and 28 Days labels. We also made the individual date labels at the bottom of the chart easier to read. You'll also notice a quick insight at the bottom of the chart that shows the number of unique users during the selected time period.
Previously, the date range selectors applied globally to all the charts. These selectors are now local to each chart, allowing you more control over how you view your data.
The date selector provides the following ranges:
Previously, when you added a filter, it was applied to all the charts on the page. Now, filters apply only to the chart you're viewing. We've also expanded the options available for the 'Surface' filter to cover surfaces such as mobile devices, smart speakers, and smart displays.
The filter feature also lets you show data breakdowns over different dimensions. By default, the chart shows a single consolidated line, a result of all the filters applied. You can now select the ‘Show breakdown by’ option to see how the components of that data contribute to the totals based on the dimension you selected.
A brand new addition to analytics is a retention metrics chart that helps you understand how well your Action retains users. The chart shows how many users you had in a given week and how many of them returned in each of the following weeks, for up to 5 weeks. The higher the percentage week after week, the better your retention.
When you hover over a cell in the chart, you can see the exact number of users from that starting week who returned in the week in question.
To learn more about what each metric means, you can check out our documentation.
Try out these new improvements to see how your Actions are performing with your users, and check out our documentation to learn more. Let us know if you have any feedback or suggestions for metrics you need to improve your Actions. Thanks for reading! To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.
Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. Can’t wait to see what you build!
Posted by Tomer Amarilio, Product Manager, Google Assistant
Headphones were one of the first devices optimized for the Google Assistant. With just your voice, you can ask the Assistant to make calls to friends or skip to the next song while you're commuting on the subway or biking around on the weekend, all without having to glance at your phone.
But as wireless Bluetooth devices like headphones and earbuds become more popular, we need to make it easier to have the same great Assistant experience across many headsets. We collaborated with Qualcomm to design a comprehensive, customizable development kit to provide all device makers with the building blocks needed to create a smart headset with the Google Assistant. The new Qualcomm Smart Headset Development Kit for the Google Assistant is powered by Qualcomm’s QCC5100-series Bluetooth audio chip and supports Google Fast Pair to make pairing Bluetooth accessories a hassle-free process.
To inspire device makers, we also built a Qualcomm Smart Headset Reference Design which delivers high quality audio, noise cancellation capabilities, and supports extended battery life and playback time. The reference design includes a push button to activate the Assistant and is just an example of what manufacturers can engineer.
Posted by Chris Turkstra, Director, Actions on Google
People are using the Assistant every day to get things done more easily, creating lots of opportunities for developers on this quickly growing platform. And we've heard from many of you who want easier ways to connect your content across the Assistant.
At I/O, we’re announcing new solutions for Actions on Google that were built specifically with you in mind. Whether you build for web, mobile, or smart home, these new tools will help make your content and services available to people who want to use their voice to get things done.
Help people with their “how to” questions
Every day, people turn to the internet to ask “how to” questions, like how to tie a tie, how to fix a faucet, or how to install a dog door. At I/O, we’re introducing support for How-to markup that lets you power richer and more helpful results in Search and the Assistant.
Adding How-to markup to your pages enables them to appear as rich results on mobile Search and on Google Assistant Smart Displays. This is an incredibly lightweight way for web developers and creators to connect with millions of people, giving them helpful step-by-step instructions with video, images and text. You can start seeing How-to markup results on Search today, and your content will become available on Smart Displays in the coming months.
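To make this concrete, here's a minimal sketch of How-to markup using schema.org's HowTo type in JSON-LD; the URLs and step text are placeholders, so check the How-to markup documentation for the full set of required and recommended properties:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "How to tie a tie",
  "image": "https://example.com/images/tie-finished.jpg",
  "step": [
    {
      "@type": "HowToStep",
      "name": "Drape and cross",
      "text": "Drape the tie around your neck and cross the wide end over the narrow end.",
      "image": "https://example.com/images/step1.jpg"
    },
    {
      "@type": "HowToStep",
      "name": "Loop and tighten",
      "text": "Bring the wide end up through the loop, then pull it down through the knot and tighten.",
      "image": "https://example.com/images/step2.jpg"
    }
  ]
}
</script>
```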
Here’s an example where DIY Network added markup to their existing content on the web to provide a more helpful, interactive result on both Google Search and the Assistant:
For content creators who don't maintain a website, we created a How-to Video Template: video creators upload a simple spreadsheet with titles, text and timestamps for their YouTube video, and we'll handle the rest. This is a simple way to transform your existing how-to videos into interactive, step-by-step tutorials across Google Assistant Smart Displays and Android phones.
Check out how REI is getting extra mileage out of their YouTube video:
How-to Video Templates are in developer preview so you can start building today, and your content will become available on Android phones and Smart Displays in the coming months.
Help people quickly get things done with App Actions
If you’re an app developer, people are turning to your apps every day to get things done. And we see people turn to the Assistant every day for a natural way to ask for help via voice. This offers an opportunity to use intents to create voice-based entry points from the Assistant to the right spot in your app.
Last year, we previewed App Actions, a simple mechanism for Android developers that uses intents from the Assistant to deep link to exactly the right spot in your app. At I/O, we are announcing the release of built-in intents for four new App Action categories: Health & Fitness, Finance and Banking, Ridesharing, and Food Ordering. Using these intents, you can integrate with the Assistant in no time.
If I wanted to track my run with Nike Run Club, I could just say "Hey Google, start my run in Nike Run Club" and the app will automatically start tracking my run. Or, let's say I just finished dinner with my friend Chad and we're splitting the check. I can say "Hey Google, send $15 to Chad on PayPal" and the Assistant takes me right into PayPal, I log in, and all of my information is filled in – all I need to do is hit send.
Each of these integrations was completed in less than a day with the addition of an Actions.xml file that handles the mapping of intents between your app and the Actions platform. You can start building with these new intents today and deploy to Assistant users on Android in the coming months. This is a huge opportunity to offer your fans an effortless way to engage more frequently with your apps.
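As a rough sketch of what that mapping looks like, here's a hypothetical actions.xml for the run-tracking example above; the deep-link scheme and app details are made up, and the exact schema is defined in the App Actions documentation (actions.intent.START_EXERCISE is one of the new Health & Fitness built-in intents):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/xml/actions.xml: maps a built-in intent to a deep link into the app -->
<actions>
  <action intentName="actions.intent.START_EXERCISE">
    <fulfillment urlTemplate="myrunapp://start-run{?exerciseType}">
      <!-- Map the built-in intent's parameter onto a deep-link parameter -->
      <parameter-mapping
          intentParameter="exercise.name"
          urlParameter="exerciseType" />
    </fulfillment>
  </action>
</actions>
```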
Take advantage of Smart Displays’ interactive screens
Last year, we saw the introduction of the Smart Display as a new device category. The interactive visual surface opens up many new possibilities for developers.
Today, we're introducing a developer preview of Interactive Canvas which lets you create full-screen experiences that combine the power of voice, visuals and touch. Canvas works across Smart Displays and Android phones, and it uses open web technologies you're likely already familiar with, like HTML, CSS and JavaScript.
Here’s an example of what you can build when you can leverage the full screen of a Smart Display:
Interactive Canvas is available for building games starting today, and we’ll be adding more categories soon. Visit the Actions Console to be one of the first to try it out.
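For a sense of the moving parts, here's a hedged sketch of a Canvas Action using the Node.js client library; the hosted URL, intent name, and data payload are placeholders:

```javascript
// Webhook side: reply with an HtmlResponse pointing at your hosted web app.
const { dialogflow, HtmlResponse } = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow();

app.intent('Default Welcome Intent', (conv) => {
  conv.ask('Welcome! Let\'s play.');
  conv.ask(new HtmlResponse({
    url: 'https://example.com/canvas/index.html', // your Canvas web app
    data: { scene: 'start' },                     // state passed to the page
  }));
});

exports.fulfillment = functions.https.onRequest(app);

// Web app side (runs on the device), in your page's JavaScript:
//   window.interactiveCanvas.ready({
//     onUpdate(data) {
//       // React to the state sent from the webhook, e.g. render 'start'.
//     },
//   });
```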
Enable smart home devices to communicate locally
There are now more than 30,000 connected devices that work with the Assistant across 3,500 brands, and today, we’re excited to announce a new suite of local technologies that are specifically designed to create an even better smart home.
Introducing a preview of the Local Home SDK, which enables you to run your smart home code locally on Google Home speakers and Nest Displays and use their radios to communicate locally with your smart devices. This reduces cloud hops and brings a new level of speed and reliability to the smart home. We've been working with some amazing partners including Philips, Wemo, TP-Link, and LIFX on testing this SDK and we're excited to open it up for all developers next month.
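The SDK is still in preview, but an app is expected to look roughly like the skeleton below: you register IDENTIFY and EXECUTE handlers that run on the speaker or display itself. Treat this as a sketch of the preview surface rather than a final API; the device id is a placeholder:

```javascript
// Runs inside the local execution environment on the Home device.
// A minimal skeleton, assuming the preview smarthome.App API.
const app = new smarthome.App('1.0.0');

app
  .onIdentify((request) => {
    // Match the locally discovered device to an id from your SYNC response.
    return {
      intent: smarthome.Intents.IDENTIFY,
      requestId: request.requestId,
      payload: { device: { id: 'light-1' } },
    };
  })
  .onExecute((request) => {
    // Send the command to the device over the local network here, then
    // report the result back to the Assistant.
    return new smarthome.Execute.Response.Builder()
      .setRequestId(request.requestId)
      .build();
  })
  .listen()
  .then(() => console.log('Local home app is ready'));
```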
Make setup more seamless
And through the Local Home SDK, we're making device setup more seamless, building on the streamlined setup we launched in partnership with GE smart lights this past October. So far, people have loved the ability to set up their lights in less than a minute in the Google Home app. We're now scaling this to more partners, so go here if you're interested.
Make your devices smart with Assistant Connect
Also, at CES earlier this year we previewed Google Assistant Connect which leverages the Local Home SDK. Assistant Connect enables smart home and appliance developers to easily add Assistant functionality into their devices at low cost. It does this by offloading a lot of work onto the Assistant to complete Actions, display content and respond to commands. We've been hard at work developing the platform along with the first products built on it by Anker, Leviton and Tile. We can't wait to show you more about Assistant Connect later this year.
New device types and traits
For those of you creating Actions for the smart home, we're also releasing 16 new device types and three new device traits: LockUnlock, ArmDisarm, and Timer. Head over to our developer documentation for the full list of 38 device types and 18 device traits, and check out our sample project on GitHub to start building.
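For instance, a smart lock that uses the new LockUnlock trait would advertise it in your SYNC response along these lines; the ids and names are illustrative, and the full response shape is in the smart home documentation:

```javascript
// A sketch of the devices portion of a SYNC response using a new type/trait.
const syncResponse = {
  requestId: '12345678901234567890', // echo the incoming request's id
  payload: {
    agentUserId: 'user-123',
    devices: [{
      id: 'front-door-lock',
      type: 'action.devices.types.LOCK',            // one of the new device types
      traits: ['action.devices.traits.LockUnlock'], // one of the new traits
      name: { name: 'Front door lock' },
      willReportState: true,
    }],
  },
};
console.log(JSON.stringify(syncResponse, null, 2));
```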
Whether you’re looking to extend the reach of your content, drive more usage in your apps, or build custom Assistant-powered experiences, you now have more tools to do so.
If you want to learn more about how you can start building with these tools, check out our website to get started and our schedule so you can tune in to all of our developer talks that we’ll be hosting throughout the week.
We can’t wait to build together with you!
Posted by Mary Chen, Strategy Lead, Actions on Google
This year at Google I/O, the Actions on Google team is sharing new ways developers of all types can use the Assistant to help users get things done. Whether you’re making Android apps, websites, web content, Actions, or IoT devices, you’ll see how the Assistant can help you engage with users in natural and conversational ways.
Tune in to our announcements during the developer keynote, and then dive deeper with our technical talks. We listed the talks out below by area of interest. Make sure to bookmark them and reserve your seat if you’re attending live, or check back for livestream details if you’re joining us online.
In addition to these sessions, stay tuned for interactive demos and codelabs that you can try at I/O and at home. Follow @ActionsOnGoogle for updates and highlights before, during, and after the festivities.
See you soon!
Posted by Brad Abrams, Group Product Manager, Actions on Google
Before we look forward and discuss updates to Actions on Google for 2019, we wanted to recognize our global developer community for your tremendous work in 2018. We saw more than 4 times the number of projects created with Actions on Google this past year. And some of the most popular Action categories include Games and Trivia, Home Control, Music, Actions for Families, and Education – well done!
We hope to carry this enthusiasm forward, and at Mobile World Congress, we're announcing new tools so you can reach and engage with more people around the globe.
The Google Assistant's now available in more than 80 countries in nearly 30 languages, and you've been busy making your Actions accessible in many of those locales.
One of the most exciting things we've seen in the last couple of years is happening in places where the next billion users are coming online for the first time. In these fast-growing countries like India, Indonesia, Brazil, and Mexico, voice is often the primary way users interact with their devices because it's natural, universal, and the most accessible input method for people who are starting to engage with technology for the first time in their lives.
As more countries come online, we want to make it easy for you to reach and engage with these users as they adopt the Google Assistant into their everyday lives. There are tens of millions of users on Android Go and KaiOS in over 100 countries.
We'll be making your Actions available to Android Go and KaiOS devices in the next few months, so you should start thinking now about how to build for these platforms and users. Without any additional work required, your Actions will work on both operating systems at launch (unless, of course, your Action requires a screen with touch input). We'll also be launching a simulator so you can test your Actions to see how they look on entry-level Android Go smartphones and KaiOS feature phones.
A couple of partners have already built Actions with these new audiences in mind. Hello English, for example, created an Action to offer English lessons for users that speak Hindi, to create more opportunities for people through language learning. And Where is My Train? (WIMT) was built for the millions of Indians commuting daily, offering real-time locations and times for trains accessible by voice. Check out our developer docs for KaiOS and Android Go Edition, and start building for the next billion users.
And we're not just focused on a handful of emerging countries. We're always working to enable all of Actions on Google's tools so users can enjoy the best experience possible regardless of the country they live in or the language they speak—our work here never ends! Here's a snapshot of some of the progress we've made this past year:
We've already talked about how busy the development community was this past year, and we've been hard at work to keep up! If you're looking to reach and engage with millions—even billions more users—now's a good time to start thinking about how your Action can make a difference in people's lives around the globe.
Posted by Ilya Gelfenbeyn, Head of the Google Assistant Investments program
Last year, we announced the Google Assistant Investments program with the goal to help pioneering startups bring their ideas to life in the digital assistant ecosystem. Not only have we invested in some really great startups, we've also been working closely with these companies to make their services available to more users.
We're excited to be back to announce five new portfolio companies and catch up on the progress some of them have made this past year. With the next batch of investments, we're helping companies explore how digital assistants can improve the hospitality, insurance, fashion and education industries, and we have something for sports fans too.
First up, AskPorter. This London-based team was founded to make managing spaces simple, providing every property manager and occupant with a digital personal assistant. AskPorter is an AI-powered property management platform with a digital assistant called Porter. Porter takes care of all aspects of property management, such as guiding inspections, arranging viewings, troubleshooting maintenance issues and chasing payments.
GradeSlam is an on-demand, chat-based, personalized learning and tutoring service available across all subject areas. Sessions are conducted via chat, creating a learning environment that allows students to interact freely and personally with qualified educators. The Montreal-based team's service is already used by more than 150,000 students, teachers and administrators.
Aiva Health puts smart speakers in hospitals and senior communities to reduce response times and improve satisfaction for patients, seniors, and caregivers alike. Aiva understands patient requests and routes them to the most appropriate caregiver so they can respond instantly via their mobile app. The Aiva platform provides centralized IoT management, powering Smart Hospitals and Smart Communities.
StyleHacks (formerly Maison Me) was founded with a goal of empowering people to take back control of their style and wardrobe. With a conversational interface and personalized AI-powered recommendations, they're helping people live their most stylish lives. The team launched the "StyleHacks" Action for phones and Smart Displays in December 2018, helping people decide what to wear by providing personalized recommendations based on the weather and their preferences. And in the next few months, StyleHacks will also be able to help you shop for clothes you'll actually wear. Just ask StyleHacks what to wear today.
StatMuse turns the biggest sports stars into your own personal sports commentator. Powered by the personalities of more than 25 sports superstars including Peyton Manning, Jerry Rice and Scott Van Pelt, fans can get scores, stats and recaps for the NBA, NFL, NHL and MLB dating back to 1876. To try it out, just say, "Hey Google, talk to StatMuse."
It's been almost a year since we launched the Investments program and we're happy to see how some of these companies are already using voice to broaden the Google Assistant's capabilities. If you're working on new ways for people to use their voice to get things done, or building new hardware devices for digital assistants, we'd like to hear from you.
Posted by Julia Chen Davidson, Head of Partner Marketing, Google Home
We recently launched the Google Home Hub, the first ever Made by Google smart speaker with a screen, and we knew that a lot of you would want to put these helpful devices in the kitchen—perhaps the most productive room in the house. With the Google Assistant built into the Home Hub, you can use your voice—or your hands—to multitask during meal time. You can manage your shopping list, map out your family calendar, create reminders for the week, and even help your kids out with their homework.
To make the Google Assistant on the Home Hub even more helpful in the kitchen, we partnered with BuzzFeed's Tasty, the largest social food network in the world, to bring 2,000 of their step-by-step tutorials to the Assistant, adding to the tens of thousands of recipes already available. With Tasty on the Home Hub, you can search for recipes based on the ingredients you have in the pantry, your dietary restrictions, cuisine preferences and more. And once you find the right recipe, Tasty will walk you through each recipe with instructional videos and step-by-step guidance.
Tasty's Action shows off how brands can combine voice with visuals to create next-generation experiences for our smart homes. We asked Sami Simon, Product Manager for BuzzFeed Media Brands, a few questions about building for the Google Assistant and we hope you'll find some inspiration for how you can combine voice and touch for the new category of devices in our homes.
What additive value do you see for your users by building an Action for the Google Assistant that's different from an app or YouTube video series, for example?
We all know that feeling when you have your hands in a bowl of ground meat and you realize you have to tap the app to go to the next step or unpause the YouTube video you were watching (I can attest to random food smudges all over my phone and computer for this very reason!).
With our Action, people can use the Google Assistant to get a helping hand while cooking, navigating a Tasty recipe just by using their voice. Without having to break the flow of rolling out dough or chopping an onion, we can now guide people on what to expect next in their cooking process. What's more, with the Google Home Hub, which has the added bonus of a display screen, home chefs can also quickly glance at the video instructions for extra guidance.
The Google Home Hub gives users all of Google, in their home, at a glance. What advantages do you see for Tasty in being a part of voice-enabled devices in the home?
The Assistant on the Google Home Hub enhances the Tasty experience in the kitchen, making it easier than ever for home chefs to cook Tasty recipes, either by utilizing voice commands or the screen display. Tasty is already the centerpiece of the kitchen, and with the Google Home Hub integration, we have the opportunity to provide additional value to our audience. For instance, we've introduced features like Clean Out My Fridge where users share their available ingredients and Tasty recommends what to cook. We're so excited that we can seamlessly provide inspiration and coaching to all home chefs and make cooking even more accessible.
How do you think these new devices will shape the future of digital assistance? How did you think through when to use voice and visual components in your Action?
In our day-to-day lives, we don't necessarily think critically about the best way to receive information in a given instance, but this project challenged us to create the optimal cooking experience. Ultimately we designed the Action to be voice-first to harness the power of the Assistant.
We then layered in the supplemental visuals to make the cooking experience even easier and make searching our recipe catalogue more fun. For instance, if you're busy stir frying, all the pertinent information would be read aloud to you, and if you wanted to quickly check what this might look like, we also provide the visual as additional guidance.
Can you elaborate on 1-3 key findings that your team discovered while testing the Action for the Home Hub?
Tasty's lens on cooking is to provide a fun and accessible experience in the kitchen, which we wanted to have come across with the Action. We developed a personality profile for Tasty with the mission of connecting with chefs of all levels, which served as a guide for making decisions about the Action. For instance, once we defined the voice of Tasty, we knew how to keep the dialogue conversational in order to better resonate with our audience.
Additionally, while most people have had some experience with digital assistants, their knowledge of how assistants work and ways that they use them vary wildly from person to person. When we did user testing, we realized that unlike designing UX for a website, there weren't as many common design patterns we could rely on. Keeping this in mind helped us to continuously ensure that our user paths were as clear as possible and that we always provided users support if they got lost or confused.
What are you most excited about for the future of digital assistance and branded experiences there? Where do you foresee this ecosystem going?
I'm really excited for people to discover more use cases we haven't even dreamed of yet. We've thoroughly explored practical applications of the Assistant, so I'm eager to see how we can develop more creative Actions and evolve how we think about digital assistants. As the Assistant will only get smarter and better at predicting people's behavior, I'm looking forward to seeing the growth of helpful and innovative Actions, and applying those to Tasty's mission to make cooking even more accessible.
What's next for Tasty and your Action? What additional opportunities do you foresee for your brand in digital assistance or conversational interfaces?
We are proud of how our Action leverages the Google Assistant to enhance the cooking experience for our audience, and excited for how we can evolve the feature set in the future. The Tasty brand has evolved its videos beyond our popular top-down recipe format. It would be an awesome opportunity to expand our Action to incorporate the full breadth of the Tasty brand, such as our creative long-form programming or extended cooking tutorials, so we can continue helping people feel more comfortable in the kitchen.
To check out Tasty's Action yourself, just say "Hey Google, ask Tasty what I should make for dinner" on your Home Hub or Smart Display. And to learn more about the solutions we have for businesses, take a look at our Assistant Business site to get started building for the Google Assistant.
If you don't have the resources to build in-house, you can also work with our talented partners that have already built Actions for all types of use cases. To make it even easier to find the perfect partner, we recently launched a new website that shows these agencies on a map with more details about how to get in touch. And if you're an agency already building Actions, we'd love to hear from you. Just reach out here and we'll see if we can offer some help along the way!
Posted by Mikhail Turilin, Product Manager, Actions on Google
Building engaging Actions for the Google Assistant is just the first step in your journey for delivering a great experience for your users. We also understand how important it is for many of you to get compensated for your hard work by enabling quick, hands-free transactional experiences through the Google Assistant.
Let's take a look at some of the best practices you should consider when adding transactions to your Actions!
Traditional account linking requires the user to open a web browser and manually log in to a merchant's website. This can lead to higher abandonment rates for a couple of reasons:
Our new Google Sign-In for the Assistant flow solves this problem. By implementing this authentication flow, your users will only need to tap twice on the screen to link their accounts or create a new account on your website. Connecting individual user profiles to your Actions gives you an opportunity to personalize your customer experience based on your existing relationship with a user.
And if you already have a loyalty program in place, account linking with OAuth and Google Sign-In lets users accrue points and access discounts.
Head over to our step-by-step guide to learn how to incorporate Google Sign-In.
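As a taste of what the flow looks like with the Node.js client library, here's a hedged sketch; the intent names are placeholders, and the second intent is assumed to be configured with the actions_intent_SIGN_IN event in Dialogflow:

```javascript
const { dialogflow, SignIn } = require('actions-on-google');
const functions = require('firebase-functions');

// Use the OAuth client id from your Actions console project.
const app = dialogflow({ clientId: 'YOUR_CLIENT_ID.apps.googleusercontent.com' });

app.intent('Start Sign In', (conv) => {
  // The string is the reason for signing in, shown or spoken to the user.
  conv.ask(new SignIn('To save your preferences'));
});

// Fires after the sign-in flow completes.
app.intent('Get Sign In', (conv, params, signin) => {
  if (signin.status === 'OK') {
    conv.ask(`Thanks for signing in! I have you as ${conv.user.email}.`);
  } else {
    conv.ask('No problem, you can sign in later. What would you like to do?');
  }
});

exports.fulfillment = functions.https.onRequest(app);
```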
Most people prefer to use the Google Assistant quickly, whether they're at home or on the go. So if you're a merchant, you should look for opportunities to simplify the ordering process.
Choosing a product from a list of dozens of items takes a long time, which is why many consumers enjoy the ability to quickly reorder items when shopping online. Implementing reordering with the Google Assistant gives you an opportunity to solve both problems at once.
Reordering is based on the user's history of previous purchases, so you'll need to implement account linking to identify returning users. Once the account is linked, connect to the order history on your backend and present the choices to the user.
Just Eat, an online food ordering and delivery service in the UK, focuses on reordering as one of their core flows because they expect their customers to use the Google Assistant to reorder their favorite meals.
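One way to sketch the reordering flow with the Node.js client library: after the account is linked, fetch the user's recent orders from your own backend and present them as a list. getRecentOrders is a hypothetical helper standing in for your order-history lookup:

```javascript
const { dialogflow, List } = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow({ clientId: 'YOUR_CLIENT_ID.apps.googleusercontent.com' });

// Hypothetical backend lookup, keyed by the linked account's verified email.
async function getRecentOrders(email) {
  return [
    { key: 'order-42', title: 'Margherita pizza', description: 'Ordered last Friday' },
    { key: 'order-41', title: 'Pad thai', description: 'Ordered two weeks ago' },
  ];
}

app.intent('Reorder', async (conv) => {
  const orders = await getRecentOrders(conv.user.email);
  const items = {};
  for (const order of orders) {
    items[order.key] = { title: order.title, description: order.description };
  }
  conv.ask('Here are your recent orders. Which one would you like again?');
  // The user's selection arrives via the actions_intent_OPTION event.
  conv.ask(new List({ title: 'Recent orders', items }));
});

exports.fulfillment = functions.https.onRequest(app);
```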
Once a user has decided they're ready to make a purchase, it's important to provide a quick checkout experience. To help, we've expanded payment options for transactions to include Google Pay, a fast, simple way to pay online, in stores, and in the Google Assistant.
Google Pay reduces customer friction during checkout because it's already connected to users' Google accounts. Users don't need to go back and forth between the Google Assistant and your website to add a payment method. Instead, users can share the payment method that they have on file with Google Pay.
Best of all, it's simple to integrate – just follow the instructions in our transactions docs.
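Before proposing an order, it's worth confirming that the user can actually transact. Here's a hedged sketch with the Node.js client library; the gateway parameters are placeholders for your payment processor's real values, and the result intent is assumed to be wired to the actions_intent_TRANSACTION_REQUIREMENTS_CHECK event:

```javascript
const { dialogflow, TransactionRequirements } = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow();

app.intent('Check Transaction Requirements', (conv) => {
  conv.ask(new TransactionRequirements({
    orderOptions: { requestDeliveryAddress: false },
    paymentOptions: {
      googleProvidedOptions: {
        prepaidCardDisallowed: false,
        supportedCardNetworks: ['VISA', 'MASTERCARD'],
        tokenizationParameters: {
          tokenizationType: 'PAYMENT_GATEWAY',
          // Placeholder gateway config; substitute your processor's values.
          parameters: { gateway: 'examplegateway', gatewayMerchantId: 'exampleMerchantId' },
        },
      },
    },
  }));
});

app.intent('Transaction Requirements Result', (conv) => {
  const result = conv.arguments.get('TRANSACTION_REQUIREMENTS_CHECK_RESULT');
  if (result && result.resultType === 'OK') {
    conv.ask('Great, you can pay with Google Pay here. What would you like to order?');
  } else {
    conv.close('Sorry, it looks like transactions are not available right now.');
  }
});

exports.fulfillment = functions.https.onRequest(app);
```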
At I/O, we announced that voice-only transactions for Google Home are now supported in the US, UK, Canada, Germany, France, Australia, and Japan. A completely hands-free experience will give users more ways to complete transactions with your Actions.
Here are a few things to keep in mind when designing your transactions for voice-only surfaces:
Learn more tips in our Conversation Design Guidelines.
As we expand support for transactions in new countries and on new Google Assistant surfaces, now is the perfect time to make sure your transactional experiences are designed with users in mind so you can increase conversion and minimize drop-off.
Posted by Tarun Jain, Group PM, Actions on Google
The Google Assistant helps you get things done across the devices you have at your side throughout your day--a bedside smart speaker, your mobile device while on the go, or even your kitchen Smart Display when winding down in the evening.
One of the most common questions we get from developers is: how do I create a seamless path for users to complete purchases across all these types of devices? Another question we hear often is: how can I better personalize my experience for users on the Assistant with privacy in mind?
Today, we're making these easier for developers with support for digital goods and subscriptions, and Google Sign-in for the Assistant. We're also giving the Google Assistant a complete makeover on mobile phones, enabling developers to create even more visually rich integrations.
While we've offered transactions for physical goods for some time, starting today you will also be able to offer digital goods, including one-time purchases like upgrades--expansion packs or new levels, for example--and even recurring subscriptions, directly within your Action.
Starting today, users can complete these transactions while in conversation with your Action through speakers, phones, and Smart Displays. This will be supported in the U.S. to start, with more locales coming soon.
Headspace, for example, now offers Android users an option to subscribe to their plans, meaning users can purchase a subscription and immediately see an upgraded experience while talking to their Action. Try it for yourself by telling your Google Assistant, "meditate with Headspace."
Volley added digital goods to their role-playing game Castle Master so users could enhance their experience by purchasing upgrades. Try it yourself by asking your Google Assistant to "play Castle Master."
You can also ensure a seamless premium experience as users move between your Android app and Action for Assistant by letting users access their digital goods across their relationship with you, regardless of where the purchase originated. You can manage your digital goods for both your app and your Action in one place, in the Play Console.
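At a high level the flow is: list your Play-managed products, ask the Assistant to complete the purchase, then inspect the result. Here's a hedged sketch of the last two steps with the Node.js client library; the SKU id and package name are placeholders that must match what you've defined in the Play Console, and the result intent is assumed to be wired to the actions_intent_COMPLETE_PURCHASE event:

```javascript
const { dialogflow, CompletePurchase } = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow();

app.intent('Buy Expansion Pack', (conv) => {
  conv.ask(new CompletePurchase({
    skuId: {
      skuType: 'SKU_TYPE_IN_APP',
      id: 'expansion_pack_1',          // product id defined in the Play Console
      packageName: 'com.example.game', // your Android app's package name
    },
  }));
});

app.intent('Purchase Result', (conv) => {
  const arg = conv.arguments.get('COMPLETE_PURCHASE_VALUE');
  if (arg && arg.purchaseStatus === 'PURCHASE_STATUS_OK') {
    conv.ask('Purchase complete! Your new levels are unlocked.');
  } else {
    conv.ask('The purchase did not go through. Would you like to try again?');
  }
});

exports.fulfillment = functions.https.onRequest(app);
```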
Once your users have access to a premium experience with digital goods, you will want to make sure your Action remembers them. To help with that, we're also introducing Google Sign-In for the Assistant, a secure authentication method that simplifies account linking for your users and reduces user drop-off during login. Google Sign-In provides the most convenient way to log in, with just a few taps. With Google Sign-In, users can even use just their voice to log in and link accounts on smart speakers with the Assistant.
In the past, account linking could be a frustrating experience for your users; having to manually type a username and password--or worse, create a new account--breaks the natural conversational flow. With Google Sign-In, users can now create a new account with just a tap or confirmation through their voice. Most users can even link to their existing accounts with your service using their verified email address.
For developers, Google Sign-In also makes it easier to support login and personalize your Action for users. Previously, developers needed to build an account system and support OAuth-based account linking in order to personalize their Action. Now, you have the option to use Google Sign-In to support login for any user with a Google account.
Starbucks added Google Sign-In for the Assistant to enable users of their Action to access their Starbucks Rewards™ accounts and earn stars for their purchases. Since adding Google Sign-In for the Assistant, they've seen login conversion nearly double versus their previous implementation, which required manual account entry.
Check out our guide on the different authentication options available to you, to understand which best meets your needs.
Today, we're launching the first major makeover for the Google Assistant on phones, bringing a richer, more interactive interface to the devices we carry with us throughout the day.
Since the Google Assistant made its debut, we've noticed that nearly half of all interactions with the Assistant today include both voice and touch. With this redesign, we're making the Assistant more visually assistive for users, combining voice with touch in a way that gives users the right controls in the right situations.
For developers, we've also made it easy to bring great multimodal experiences to life on the phone and other Assistant-enabled devices with screens, including Smart Displays. This presents a new opportunity to express your brand through richer visuals and with greater real estate in your Action.
To get started, you can now add rich responses to customize your Action for visual interfaces. With rich responses you can build visually engaging Actions for your users with a set of plug-and-play visual components for different types of content. If you've already added rich responses to your Action, these will work automatically on the new mobile redesign. Be sure to also check out our guidance on how and when to use visuals in your Action.
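For example, a rich response pairing a spoken prompt with a basic card and suggestion chips might look like this with the Node.js client library; the card content and URLs are placeholders:

```javascript
const { dialogflow, BasicCard, Button, Image, Suggestions } = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow();

app.intent('Show Recipe', (conv) => {
  conv.ask('Here is a recipe you might like.');
  conv.ask(new BasicCard({
    title: 'Lemon chicken',
    subtitle: 'Ready in 30 minutes',
    image: new Image({
      url: 'https://example.com/images/lemon-chicken.jpg',
      alt: 'Lemon chicken',
    }),
    text: 'A quick weeknight dinner with five ingredients.',
    buttons: new Button({
      title: 'View full recipe',
      url: 'https://example.com/recipes/lemon-chicken',
    }),
  }));
  conv.ask(new Suggestions(['More recipes', 'Save for later']));
});

exports.fulfillment = functions.https.onRequest(app);
```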
Below you can find some examples of the ways some partners and developers have already started to make use of rich responses to provide more visually interactive experiences for Assistant users on phones.
You can try these yourself by asking your Google Assistant to "order my usual from Starbucks," "ask H&M Home to give inspiration for my kitchen," "ask Fitstar to workout," or "ask Food Network for chicken recipes."
Ready to get building? Check out our documentation on how to add digital goods and Google Sign-In for Assistant to create premium and personalized experiences for your users across devices.
To improve your visual experience for phone users, check out our conversation design site, our documentation on different surfaces, and our documentation and sample on how you can use rich responses to build with visual components. You can also test and debug your different types of voice, visual, and multimodal experiences in the Actions simulator.
Good luck building, and please continue to share your ideas and feedback with us. Don't forget that once you publish your first Action you can join our community program* and receive your exclusive Google Assistant t-shirt and up to $200 of monthly Google Cloud credit.
Posted by Saba Zaidi, Senior Interaction Designer, Google Assistant
Earlier this year we announced Smart Displays, a new category of devices with the Google Assistant built in, that augment voice experiences with immersive visuals. These new, highly visual devices can make it easier to convey complex information, suggest Actions, support transactions, and express your brand. Starting today, Smart Displays are available for purchase in major US retailers, both in-store and online.
Interacting through voice is fast and easy because speaking comes naturally to people, and language doesn't constrain them to predefined paths the way traditional visual interfaces do. However, in audio-only interfaces, it can be difficult to communicate detailed information like lists or tables, and nearly impossible to represent rich content like images, charts or a visual brand identity. Smart Displays allow you to create Actions for the Assistant that respond to natural conversation while also displaying information and representing your brand in an immersive, visual way.
Today we're announcing consumer availability of rich responses optimized for Smart Displays. With rich responses, you can use basic cards, lists, tables, carousels and suggestion chips, giving you an array of visual interactions for your Action, with more visual components coming soon. In addition, you can create custom themes to more deeply customize your Action's look and feel.
If you've already built a voice-centric Action for the Google Assistant, not to worry, it'll work automatically on Smart Displays. But we highly recommend adding rich responses and custom themes to make your Action even more visually engaging and useful to your users on Smart Displays. Here are a few tips to get you started:
Smart Displays offer several visual formats for displaying information and facilitating user input. A carousel of images, a list or a table can help users scan information efficiently and then interact with a quick tap or swipe.
For example, consider a long, spoken prompt like: "Welcome to National Anthems! You can play the national anthems from 20 different countries, including the United States, Canada and the United Kingdom. Which would you like to hear?"
Instead of merely showing the transcript of that whole spoken prompt on the screen, a carousel of country flags makes it easy for users to scroll and tap the anthem they want to hear.
Suggestion chips are a great way to surface recommendations, aid feature discovery and keep the conversation moving on Smart Displays.
In this example, suggestion chips can help users find the "surprise me" feature, find the most popular anthems, or filter anthems by region.
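Sketching the anthem example with the Node.js client library, a carousel plus suggestion chips might look like the following; the option keys, titles, and image URLs are illustrative, and the selection intent is assumed to be wired to the actions_intent_OPTION event:

```javascript
const { dialogflow, Carousel, Image, Suggestions } = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow();

app.intent('Choose Anthem', (conv) => {
  conv.ask('Which anthem would you like to hear?');
  conv.ask(new Carousel({
    items: {
      US: {
        title: 'United States',
        image: new Image({ url: 'https://example.com/flags/us.png', alt: 'US flag' }),
      },
      CA: {
        title: 'Canada',
        image: new Image({ url: 'https://example.com/flags/ca.png', alt: 'Canada flag' }),
      },
      UK: {
        title: 'United Kingdom',
        image: new Image({ url: 'https://example.com/flags/uk.png', alt: 'UK flag' }),
      },
    },
  }));
  conv.ask(new Suggestions(['Surprise me', 'Most popular', 'By region']));
});

// Receives the key of the tapped carousel item.
app.intent('Anthem Selected', (conv, params, option) => {
  conv.close(`Playing the anthem for ${option}.`);
});

exports.fulfillment = functions.https.onRequest(app);
```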
You can take advantage of new custom themes to differentiate your experience and represent your brand's persona, choosing a custom voice, background image or color, font style, or the shape of your cards to match your branding.
For example, an Action like California Surf Report could be themed in a more immersive and customized way.
We offer more tips on designing and building for Smart Displays and other visual devices in our conversation design site and in our talk from I/O about how to design Actions across devices.
Then head to our documentation to learn how to customize the visual appearance of your Actions with rich responses. You can also test and tinker with customizations for Smart Displays in the Actions Console simulator.
Don't forget that once you publish your first Action you can join our community program* and receive your exclusive Google Assistant t-shirt and up to $200 of monthly Google Cloud credit.
We can't wait to see—quite literally—what you build next! Thanks for being a part of our community, and as always, if you have ideas or requests that you'd like to share with our team, don't hesitate to join the conversation.
*Some countries are not eligible to participate in the developer community program; please review the terms and conditions.
Though it's been just a few short weeks since we released a new set of features for Actions on Google, we're kicking off our presence at South by Southwest (SXSW) with a few more updates for you.
SXSW brings together creatives interested in fusing marketing and technology, and what better way to start the festival than with new features that enable you to be more creative and to build new types of Actions that help your users get more things done.
This past year, we've heard from many developers who want to offer great media experiences as part of their Actions. While you can already make your podcasts discoverable to Assistant users, our new media response API allows you to develop deeper, more engaging audio-focused conversational Actions that include, for example, clips from TV shows, interactive stories, meditation, relaxing sounds, and news briefings.
Your users can control this audio playback on voice-activated speakers like Google Home, Android phones, and more devices coming soon. On Android phones, they can even use the controls on their phone's notification area and lock screen.
Some developers who are already using our new media response API include The Daily Show, Calm, and CNBC.
To get started using our media response API, head over to our documentation to learn more.
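Here's a hedged sketch of a media response with the Node.js client library; the audio URL and intent names are placeholders, and the status intent is assumed to be wired to the actions_intent_MEDIA_STATUS event so you can react when playback finishes:

```javascript
const { dialogflow, MediaObject, Image, Suggestions } = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow();

app.intent('Play Briefing', (conv) => {
  conv.ask('Sure, here is today\'s briefing.');
  conv.ask(new MediaObject({
    name: 'Daily briefing',
    url: 'https://example.com/audio/briefing.mp3', // HTTPS audio stream
    description: 'Top stories for today',
    icon: new Image({ url: 'https://example.com/icon.png', alt: 'Briefing icon' }),
  }));
  // On screened devices, media responses should be paired with suggestions.
  conv.ask(new Suggestions(['Next briefing']));
});

// Fires when playback finishes, so you can queue what comes next.
app.intent('Media Status', (conv) => {
  const status = conv.arguments.get('MEDIA_STATUS');
  if (status && status.status === 'FINISHED') {
    conv.close('That is all for now. Come back tomorrow!');
  }
});

exports.fulfillment = functions.https.onRequest(app);
```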
And if your content is more visual than audio-based, we're also introducing a browse carousel for your Actions that allows you to show browsable content -- e.g., products, recipes, places -- in a visual experience that users can simply scroll through, left to right. See an example of how this would look to your users below, then learn more about our browse carousel here.
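Here's a hedged sketch of a browse carousel with the Node.js client library; browse carousels only render on screened surfaces, and the titles, URLs, and images below are placeholders:

```javascript
const { dialogflow, BrowseCarousel, BrowseCarouselItem, Image } = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow();

app.intent('Browse Recipes', (conv) => {
  conv.ask('Here are some recipes you can browse.');
  conv.ask(new BrowseCarousel({
    items: [
      new BrowseCarouselItem({
        title: 'Lemon chicken',
        url: 'https://example.com/recipes/lemon-chicken', // opens in the browser
        description: 'Ready in 30 minutes',
        image: new Image({ url: 'https://example.com/images/lemon-chicken.jpg', alt: 'Lemon chicken' }),
        footer: 'example.com',
      }),
      new BrowseCarouselItem({
        title: 'Veggie stir fry',
        url: 'https://example.com/recipes/stir-fry',
        description: 'A quick weeknight dinner',
        image: new Image({ url: 'https://example.com/images/stir-fry.jpg', alt: 'Veggie stir fry' }),
        footer: 'example.com',
      }),
    ],
  }));
});

exports.fulfillment = functions.https.onRequest(app);
```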
While having a great user experience is important, we also want to ensure you have the right tools to re-engage your users so they keep coming back to the experience you've built. To that end, a few months ago, we introduced daily updates and push notifications as a developer preview.
Starting today, your users will have access to this feature. Esquire is already using it to send daily "wisdom tips," Forbes sends a quote of the day, and SpeedyBit sends daily updates of cryptocurrency prices to keep users in the know on market fluctuations.
As soon as you submit your Action for review with daily updates or push notifications enabled, and it's approved, your users will be able to opt into this re-engagement channel. Learn more in our docs.
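The opt-in side of push notifications looks roughly like this with the Node.js client library; 'Daily Tip' is a placeholder for the intent you want to notify users about, and the result intent is assumed to be wired to the actions_intent_PERMISSION event:

```javascript
const { dialogflow, UpdatePermission } = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow();

app.intent('Setup Notifications', (conv) => {
  // Ask the user to opt in to notifications that trigger a specific intent.
  conv.ask(new UpdatePermission({ intent: 'Daily Tip' }));
});

app.intent('Notification Opt-in Result', (conv) => {
  if (conv.arguments.get('PERMISSION')) {
    // Persist this id so your backend can target the user when it later
    // sends notifications through the Actions API.
    const updatesUserId = conv.arguments.get('UPDATES_USER_ID');
    console.log('Opted in, updates user id:', updatesUserId);
    conv.close('Great, I will send you a tip every day!');
  } else {
    conv.close('Okay, no notifications for now.');
  }
});

exports.fulfillment = functions.https.onRequest(app);
```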
Actions on Google now allows you to access digital purchases (including paid app purchases, in-app purchases, and in-app subscriptions) that your users make from your Android app. By doing so, you can recognize when you're interacting with a user who's paid for a premium experience in your Android app, and serve that same experience in your Action, across devices.
And the best part? This is all done behind the scenes, so the user doesn't need to take any additional steps, like signing in, for you to provide this experience. Economist Espresso, for example, now knows when a user has already paid for a subscription with Google Play, and then offers an upgraded experience to the same user through their Action.
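A hedged sketch of what the check can look like with the Node.js client library: entitlements purchased in your Android app surface on the conversation object, so a premium greeting can key off them. The SKU id here is hypothetical:

```javascript
const { dialogflow } = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow();

app.intent('Default Welcome Intent', (conv) => {
  // Entitlements from your Android app, grouped by package name.
  const packages = conv.user.entitlements || [];
  const hasPremium = packages.some((pkg) =>
    (pkg.entitlements || []).some((e) => e.sku === 'premium_subscription'));

  if (hasPremium) {
    conv.ask('Welcome back to your premium experience!');
  } else {
    conv.ask('Welcome! Ask me about going premium.');
  }
});

exports.fulfillment = functions.https.onRequest(app);
```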
In December of last year, we announced the addition of Built-in Device Actions to the Google Assistant SDK for devices. This feature allows developers to extend any Google Assistant embedded in their device using traits and grammars that are maintained by Google and largely focused on home automation, such as "turn on", "turn off" and "turn the temperature down".
Today we're announcing the addition of Custom Device Actions which are more flexible Device Actions, allowing developers to specify any grammar and command to be executed by their device. Once you build these Custom Device Actions, users will be able to activate specific capabilities through the Google Assistant. This leads to more natural ways in which users interact with their Assistant-enabled devices, including the ability to utilize more specific device capabilities.
"Ok Google, turn on the oven"
"Ok, turning on the oven"
"Ok Google, set the oven to convection and preheat to 350 degrees"
"Ok, setting the oven to convection and preheating to 350 degrees"
To give you a sense of how this might work in the real world, check out a prototype, Talk to the Light from the talented Red Paper Heart team, that shows a zany use of this functionality. Then, check out our documentation to learn more about how you can start building these for your devices. We've provided a technical case study from Red Paper Heart and their code repository in case you want to see how they built it.
In addition to Custom Device Actions, we've also integrated device registration into the Actions on Google console, allowing developers to get up and running more quickly. To get started, check out the latest documentation and console.
Similarly, we teamed up with a few cutting-edge teams to explore the creative potential of the Actions on Google platform. Following the Voice experiments the Google Creative Lab released a few months ago, these teams released four new experiments:
The code for all of these Actions is open source, and each is accompanied by an in-depth technical case study sharing what the team learned while developing their Action.
Ready to build? Take a look at our three new case studies with KLM Royal Dutch Airlines, Domino's, and Ticketmaster. Learn about their development journey with Dialogflow and how the Actions they built help them stay ahead of the conversational technology curve, be where their customers are, and assist throughout the entire user journey:
We hope these updates get your creative juices flowing and inspire you to build even more Actions and embed the Google Assistant on more devices. Don't forget that once you publish your first Action you can join our community program* and receive your exclusive Google Assistant t-shirt and up to $200 of monthly Google Cloud credit. Thanks for being a part of our community, and as always, if you have ideas or requests that you'd like to share with our team, don't hesitate to join the conversation.
While Actions on the Google Assistant are available to users on more than 400 million devices, we're focused on expanding the availability of the developer platform even further. At Mobile World Congress, we're sharing some good news for our international developer community.
Starting today, you can build Actions for the Google Assistant in seven new languages:
These new additions join English, French, German, Japanese, Korean, Spanish, Brazilian Portuguese, Italian and Russian. That brings our total count of supported languages to 16! You can develop for all of them using Dialogflow and its natural language processing capabilities, or directly with the Actions SDK. And we're not stopping here–expect more languages to be added later this year.
If you localize your Actions in these new languages, they won't just be among the first Actions available in the new locales; you'll also earn rewards while you do it! And if you're new to Actions on Google, check out our community program* to learn how you can snag an exclusive Google Assistant t-shirt and up to $200 of monthly Google Cloud credit by publishing your first Action. We've already seen partners take advantage of other languages we've launched in the past, like Bring!, which is now available in both English and German.
Besides supporting new languages, we're also making it easier to build your Action for global audiences. First, we recently added support for building with templates—creating an Action by filling in a Google Sheet without a single line of code—for French, German, and Japanese. For example, TF1 built Téléfoot, using templates in French to create an engaging World Cup-themed trivia game with famous commentators included as sound effects.
Additionally, we've made it a little easier for you to localize your Actions into different languages by enabling you to export your directory listing information as a file. With the file in hand, you can translate offline and upload the translations to your console, making localization quicker and more organized.
But before you run off and start building Actions in new languages, take a quick tour of some of the useful developer features rolling out this week…
By the end of the year the Assistant will reach 95 percent of all eligible Android phones worldwide, and Actions are a great way for you to reach those users to help them get things done easily over voice. Sometimes, however, users may benefit from the versatility of your Android app for particularly complex or highly interactive tasks.
So today, we're introducing a new feature that lets you deep link from your Actions in the Google Assistant to a specific intent in your Android app. Here's an example of SpotHero linking from their Action to their Android app after a user purchased a parking reservation. The Android app allows the user to see more details about the reservation or redeem their spot.
As you integrate these links in your Action, you'll make it easier for your users to find what they're looking for and to move seamlessly to your Android app to complete their user journey. This new feature will roll out over the coming weeks, but you can check out our developer documentation for more information on how to get started.
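In the Node.js client library this surfaces as a DeepLink helper; here's a hedged sketch, with the destination app, URL, and package name all made up:

```javascript
const { dialogflow, DeepLink } = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow();

app.intent('Open In App', (conv) => {
  conv.ask('Sure, here are your reservation details.');
  conv.ask(new DeepLink({
    destination: 'Example Parking',             // app name spoken to the user
    url: 'https://example.com/reservations/42', // link the Android app handles
    package: 'com.example.parking',             // your Android app's package
    reason: 'view and redeem your reservation',
  }));
});

exports.fulfillment = functions.https.onRequest(app);
```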
We're also introducing askForPlace, a new conversation helper that integrates the Google Places API to enable developers to use the Google Assistant to understand location-based user queries mid-conversation. Using the new helper, the Assistant leverages Google Maps' location and points of interest (POI) expertise to provide fast, accurate places for all your users' location queries. Once the location details have been sorted out with the user, the Assistant returns the conversation back to your Action so the user can finish the interaction.
So whether your business specializes in delivering a beautiful bouquet of flowers or a piping hot pepperoni pizza, you no longer need to spend time designing models for gathering users' location requests; instead, you can focus on your Action's core experience.
Let's take a look at an example of how Uber uses the askForPlace helper to help their users book a ride:
We joined halfway through the interaction above, but it's worth pointing out that once the Uber Action asked the user "Where would you like to go?", the developer triggered the askForPlace helper to handle location disambiguation. The user is still speaking with Uber, but the Assistant handled all user inputs behind the scenes until a drop-off location was resolved. From there, Uber was able to wrap up the interaction and dispatch a driver.
Head over to the askForPlace docs to learn how to create a better user experience for your customers.
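With the Node.js client library, the helper surfaces as the Place class. Here's a hedged sketch of a ride-booking exchange like the one above, where the second intent is assumed to be wired to the actions_intent_PLACE event:

```javascript
const { dialogflow, Place } = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow();

app.intent('Request Ride', (conv) => {
  // Hand the location question over to the Assistant's place helper.
  conv.ask(new Place({
    prompt: 'Where would you like to go?',
    context: 'To find a drop-off location',
  }));
});

// Fires once the Assistant has resolved a place with the user.
app.intent('Place Received', (conv, params, place) => {
  if (!place) {
    conv.close('Sorry, I could not work out where you want to go.');
    return;
  }
  conv.close(`Okay, requesting a ride to ${place.formattedAddress}.`);
});

exports.fulfillment = functions.https.onRequest(app);
```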
And to wrap up our new feature announcements, today we're introducing an improved experience for users who engage with your Action regularly, without any work required on your end. Specifically, if users consistently come back to your Action, we'll cut back on the introductory lead-in so they get into the conversation as quickly as possible.
Today's updates are part of our commitment to improving the platform for developers, and making the Google Assistant and Actions on Google more widely available around the globe. If you have ideas or requests that you'd like to share with our team, don't hesitate to join the conversation.