Posted by Cathy Pearl, Head of Conversation Design Outreach. Illustrations by Kimberly Harvey.
Hi all! I'm Cathy Pearl, head of conversation design outreach at Google. I've been building conversational systems for a while now, starting with IVRs (phone systems) and moving on to multi-modal experiences. I'm also the author of the O'Reilly book Designing Voice User Interfaces. These days, I'm keen to introduce designers and developers to our conversation design best practices so that Actions will provide the best possible user experience. Today, I'll be talking about a fundamental first step when thinking about creating an Action: writing sample dialogs.
So, you've got a cool idea for Actions on Google you want to build. You've brushed up on Dialogflow, done some codelabs, and figured out which APIs you want to use. You're ready to start coding, right?
Not so fast!
Creating an Action always needs to start with designing an Action. Don't panic; it's not going to slow you down. Planning out the design first will save you time and headaches later, and ultimately produces a better, more usable experience.
In this post, I'll talk about the first and most important component for designing a good conversational system: sample dialogs. Sample dialogs are potential conversational paths a user might take while conversing with your Action. They look a lot like film scripts, with dialog exchanges between your Action and the user. (And, like film scripts, they should be read aloud!) Writing sample dialogs comes before writing code, and even before creating flows.
When I talk to people about the importance of sample dialogs, I get a lot of nods and agreement. But when I go back later and say, "Hey, show me your sample dialogs," I often get a sheepish smile and an excuse as to why they weren't written. The excuses vary, but they tend to rest on a few misconceptions.
First off, there is a misconception that "conversation design" (or voice user interface design) is just the top layer of the experience: the words, and perhaps the order of words, that the user will see/hear.
But conversation design goes much deeper: it drives the underlying structure of the experience, from the overall flow of the conversation to how errors are handled.
In the end, these things manifest as words, to be sure. But thinking of them as "stuff you worry about later" will set you up for failure when it comes time for your user to interact with your Action. For example, without a sample dialog, you might not realize that your prompts all start with the word "Next", making them sound robotic and stilted. Sample dialogs will also show you where you need "glue" words such as "first" and "by the way".
Google has put together design guidelines for building conversational systems. They include an introduction to sample dialogs and why they're important:
Sample dialogs will give you a quick, low-fidelity sense of the "sound-and-feel" of the interaction you're designing. They convey the flow that the user will actually experience, without the technical distractions of code notation, complex flow diagrams, recognition-grammar issues, etc.
By writing sample dialogs, you can informally experiment with and evaluate different design strategies, such as how to promote the discoverability of new features or how to confirm a user's request (for example: should you use an implicit confirmation, an explicit confirmation, or no confirmation at all?).
Check out the Google I/O 2018 Action sample dialogs to see an example. (You can also take a look at the Google I/O 2018 Action code.)
Still not sure if you really need them? Let's hear from a developer who works on Actions, Jessica Dene Earley-Cha, who said in her recent Medium post:
Let's cover how this was built. Before any coding can happen, we need to build a Conversational Design. I originally had skipped this step because I thought that I could build the structure first and then fill in the content (like building a website). However, the structure is tied in with the content. I realized this when I was hitting walls that I thought were technical, but they were there because I didn't have a design.
She makes the great point that designing for conversational systems is different from designing for the web. With a conversational interface, the content itself is part of the structure, so design becomes even more important.
So now that you're (hopefully) convinced, let's discuss four of the common pitfalls developers can avoid by using sample dialogs:
Suppose you're writing an Action for a bike shop: it can make repair and service appointments, give store hours, and list the latest deals. It's tempting to just start listing out options so the user will know everything they can do. Let's see what a sample dialog looks like using that strategy:
USER: Hey Google, talk to Alyssa's Bike Shop.
ACTION: Hi! Welcome to Alyssa's Bike Shop. Would you like to make an appointment for a repair, cancel an appointment, hear store hours, hear this week's deals, or hear next month's deals?
If you read this prompt out loud to someone else, it will quickly become apparent that too much information is being presented. Humans have a limited capacity for taking in audio, especially if it's the first time they're hearing it.
Here is a better way:
ACTION: Hi! Welcome to Alyssa's Bike Shop. I can help you make or cancel an appointment, get store hours, or tell you the latest deals. Which would you like?
Pro tip: an even better experience would be to leave out the "cancel" option if the user doesn't have any appointments.
Here's a sample dialog representing a common mistake in many conversational experiences. In this example, the user is returning to an Action they've tried previously:
USER: Hey Google, talk to Cathy's Astronomy Trivia Game.
ACTION: This is Cathy's Astronomy Trivia Game. Welcome back!
USER: Uhhh… what can I do??
When we become immersed in our design, we often forget that the user will be coming to the Action without the background we have. What they can do will not always be obvious.
Here's an improvement:
ACTION: This is Cathy's Astronomy Trivia Game. You can play solo, or join a team. What would you like to do?
USER: Uh, I think I'll join a team.
Remember, you don't have to list all your options up front. Instead, try to guide the user through the process in a clear manner, and end with a call to action.
Another common issue is not making it obvious when it's the user's turn to speak. As humans, we have many ways to let someone know it's their turn: words, intonation, eye gaze, and body language. For things like the Assistant, it's important to end the turn with a clear question or instruction. Here's an example of an Action that leaves the user hanging:
USER: Okay Google, talk to Cool Animal Facts.
ACTION: Welcome to Cool Animal Facts! What animal would you like to know about?
USER: I love penguins!
ACTION: Cool, penguins. Would you like to know about their habitat, or what they like to eat?
USER: Oh, what do they like to eat!
ACTION: Penguins can eat up to 13 pounds of fish, krill and squid a day! And did you know the fathers sit on the eggs?
USER: Yes, I saw…
ACTION: Meanwhile, the mothers go out to get fish.
USER: (thinking) Wait… am I supposed to say something?
In this example we see two problems. First, the Action asked a rhetorical question. When the user tried to answer, the Action ignored them and kept on talking.
The second problem is that after the fact was given, nothing happened. There was no indication of what the user could do next.
Here's a better way:
ACTION: Penguins can eat up to 13 pounds of fish, krill and squid a day! By the way, male penguins spend time sitting on the eggs, while the females go out to get fish. Now, would you like to hear more about penguins, or ask about another animal?
USER: I'm done, thanks.
ACTION: Sure, thanks for talking to Cool Animal Facts. Goodbye!
Pro tip: Users often want to have an explicit way out, such as saying "goodbye".
Pro tip 2: Make your grammar robust! For the question "Would you like to hear more about penguins, or ask about another animal?" be prepared to handle responses such as "yes", "no", and names of animals.
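To make that concrete, here's a minimal sketch of a Dialogflow (v2) fulfillment webhook in Python with Flask that fields all three kinds of responses. The intent names ("hear_more", "choose_animal") and the "animal" parameter are hypothetical stand-ins for whatever you define in your own agent; the point is that "yes", "no", and animal names each route to their own intent rather than falling through to a generic fallback.

from flask import Flask, request, jsonify

app = Flask(__name__)

FACTS = {'penguin': 'Penguins can eat up to 13 pounds of fish, krill and squid a day!'}

@app.route('/webhook', methods=['POST'])
def webhook():
    # Dialogflow sends the matched intent and parameters in queryResult.
    query = request.get_json(silent=True)['queryResult']
    intent = query['intent']['displayName']
    if intent == 'hear_more':        # covers "yes", "sure", "tell me more"
        text = FACTS['penguin'] + ' Want to hear another?'
    elif intent == 'choose_animal':  # covers "penguins!", "what about owls?"
        animal = query['parameters'].get('animal', '')
        text = FACTS.get(animal, "I don't know that one yet. Try penguins!")
    else:                            # covers "no", "I'm done, thanks"
        text = 'Sure, thanks for talking to Cool Animal Facts. Goodbye!'
    return jsonify({'fulfillmentText': text})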
Gracefully handling the conversation when things go wrong is critical for any conversational system. Writing multiple error prompts for every turn in the conversation can seem daunting, but a generic fallback like "I'm sorry, I didn't understand" won't help the user. Instead, each error prompt should be based on what's been happening up until that point and where the user currently is. To make this easier, write sample dialogs for error conditions; they will make clear what the prompts need to say.
USER: Okay Google, talk to the Zombie Quiz.
ACTION: Time for the Zombie Quiz! First question: which one of these should you do if you're being chased by a zombie: lock the door, run for the hills, or plant a garden?
USER: (thinking)
ACTION: Sorry, I didn't get that. Which one do you choose?
USER: What? I don't remember them...
ACTION: Sorry, I'm still having trouble. Which one do you choose?
USER: Argh!
ACTION: Sorry, that's incorrect. Question two…
Context-sensitive error handling would have gotten the user back on track:
ACTION: Sorry, I didn't get that. Which one of these should you do if you're being chased by a zombie: lock the door, run for the hills, or plant a garden?
USER: Uhh… I'm not sure… let me see… maybe the…
ACTION: Sorry, I still didn't get that. You can also say the number of the answer, like one, two, or three. Which one do you choose?
USER: Oh, number three.
ACTION: Correct!
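If your fulfillment code tracks how many no-match errors have occurred in a row, the reprompt can escalate the way the dialog above does. Here's a minimal sketch, assuming a hypothetical no_match_count that your framework supplies (for example, from a session parameter):

QUESTION = ("Which one of these should you do if you're being chased by a "
            "zombie: lock the door, run for the hills, or plant a garden?")

def reprompt(no_match_count):
    if no_match_count == 1:
        # First failure: repeat the full question, not just "which one?"
        return "Sorry, I didn't get that. " + QUESTION
    if no_match_count == 2:
        # Second failure: offer an easier way to answer.
        return ("Sorry, I still didn't get that. You can also say the number "
                "of the answer, like one, two, or three. Which one do you choose?")
    # Third failure: exit gracefully instead of looping forever.
    return "Sorry, I'm having trouble. Let's play again another time. Goodbye!"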
I know you're itching to take off and start drawing flows and writing code, but take the time to write sample dialogs first. In the long run, it will make your coding easier, and you'll have fewer bugs to fix.
Here are a few "Dos" to keep in mind when writing sample dialogs: read them aloud to catch anything that sounds stilted, write dialogs for the error paths as well as the happy path, and keep each turn short enough for users to take in by ear.
Happy writing!
Posted by Rich Hyndman, Global Tech Lead, Google Launchpad
Launchpad Studio is an acceleration program for the world's top startups. Founders work closely with Google and Alphabet product teams and experts to solve specific technical challenges and optimize their businesses for growth with machine learning. Last year we introduced our first applied-ML cohort focused on healthcare.
Today, we are excited to welcome the new cohort of Finance startups selected to participate in Launchpad Studio, including Celo, GO-JEK, Starling Bank, and GuiaBolso.
These Studio startups have been invited from across nine countries and four continents to discuss how machine learning can be utilized for financial inclusion, stable currencies, and identification services. They are defining how ML and blockchain can supercharge efforts to include everyone and ensure greater prosperity for all. Together, data and user behavior are enabling a truly global economy with inclusive and differentiated products for banking, insurance, and credit.
Each startup is paired with a Google product manager to accelerate their product development, working alongside Google's ML research and development teams. Studio provides 1:1 mentoring and access to Google's people, network, thought leadership, and technology.
"Two of the biggest barriers to the large-scale adoption of cryptocurrencies as a means of payment are ease-of-use and purchasing-power volatility. When we heard about Studio and the opportunity to work with Google's AI teams, we were immediately excited as we believe the resulting work can be beneficial not just to Celo but for the industry as a whole." - Rene Reinsberg, Co-Founder and CEO of Celo
"Our technology has accelerated economic growth across Indonesia by raising the standard of living for millions of micro-entrepreneurs including ojek drivers, restaurant owners, small businesses and other professionals. We are very excited to work with Google, and explore more on how artificial intelligence and machine learning can help us strengthen our capabilities to drive even more positive social change not only to Indonesia, but also for the region." - Kevin Aluwi, Co-Founder and CIO of GO-JEK
"At Starling, we believe that data is the key to a healthy financial life. We are excited about the opportunity to work with Google to turn data into insights that will help consumers make better and more-informed financial decisions." - Anne Boden, Founder and CEO of Starling Bank
"At GuiaBolso, we use machine learning in different workstreams, but now we are doubling down on the technology to make our users' experience even more delightful. We see Studio as a way to speed that up." - Marcio Reis, CDO of GuiaBolso
Since launching in 2015, Google Developers Launchpad has become a global network of accelerators and partners with the shared mission of accelerating innovation that solves for the world's biggest challenges. Join us at one of our Regional Accelerators and follow Launchpad's applied ML best practices by subscribing to The Lever.
Posted by Malika Cantor, Program Manager for Launchpad
The Lever is Google Developers Launchpad's new resource for sharing applied machine learning (ML) content to help startups innovate and thrive. In partnership with experts and leaders across Google and Alphabet, The Lever is operated by Launchpad, Google's global startup acceleration program. The Lever will publish the Launchpad community's experiences of integrating ML into products, and will include case studies, insights from mentors, and best practices from both Google and global thought leaders.
Peter Norvig, Google ML Research Director, and Cassie Kozyrkov, Google Cloud Chief Decision Scientist, are editors of the publication. Hear from them and other Googlers on the importance of developing and sharing applied ML product and business methodologies:
Peter Norvig (Google ML Research, Director): "The software industry has had 50 years to perfect a methodology of software development. In Machine Learning, we've only had a few years, so companies need to pay more attention to the process in order to create products that are reliable, up-to-date, have good accuracy, and are respectful of their customers' private data."
Cassie Kozyrkov (Chief Decision Scientist, Google Cloud): "We live in exciting times where the contributions of researchers have finally made it possible for non-experts to do amazing things with Artificial Intelligence. Now that anyone can stand on the shoulders of giants, process-oriented avenues of inquiry around how to best apply ML are coming to the forefront. Among these is decision intelligence engineering: a new approach to ML, focusing on how to discover opportunities and build towards safe, effective, and reliable solutions. The world is poised to make data more useful than ever before!"
Clemens Mewald (Lead, Machine Learning X and TensorFlow X): "ML/AI has had a profound impact in many areas, but I would argue that we're still very early in this journey. Many applications of ML are incremental improvements on existing features and products. Video recommendations are more relevant, ads have become more targeted and personalized. However, as Sundar said, AI is more profound than electricity (or fire). Electricity enabled modern technology, computing, and the internet. What new products will be enabled by ML/AI? I am convinced that the right ML product methodologies will help lead the way to magical products that have previously been unthinkable."
We invite you to follow the publication, and actively comment on our blog posts to share your own experience and insights.
Posted by Zachary Senzer, Product Manager
A couple of months ago at Google I/O, we announced a redesigned Actions console that makes developing your Actions easier than ever before. The new Actions console features a more seamless development experience that guides your workflow from onboarding through deployment, plus tailored analytics to manage your Actions post-launch. Simply select your use case during onboarding and the Actions console will walk you through the different stages of development.
Here are 5 tips to help you create the best Actions for your content using our new console.
Part of what makes the Actions on Google ecosystem so special is the vast array of devices that people can use to interact with your Actions. Some of these devices, including phones and our new smart displays, allow users to have rich visual interactions with your content. To help your Actions stand out, you can customize how these visual experiences appear to users of these devices. Simply visit the "Build" tab and go to theme customization in the Actions console where you can specify background images, typography, colors, and more for your Actions.
Conversational experiences can introduce complexity in how people ask to complete a task related to your Action: a user could ask for a game in thousands of different ways ("play a game for me", "find a maps quiz", "I want some trivia"). Figuring out all of the ways a user might ask for your Action is difficult. To take that difficulty off your plate, we're beginning to map the ways users might ask for your Action into a taxonomy of built-in intents.
We'll start to use the built-in intent you associate with your Action to help users more easily discover your content as we begin testing built-in intents against users' queries. We'll continue to add many more built-in intents over the coming months to cover a variety of use cases. In the Actions console, go to the "Build" tab, click "Actions", then "Add Action" and select one to get started.
While we'll continue to improve the ways users find your Actions within the Assistant, we've also made it easier for users to find your Actions outside the Assistant. Driving new traffic to your Actions is as easy as a click with Action Links. You now have the ability to define hyperlinks for each of your Actions to be used on your website, social media, email newsletters, and more. These links will launch users directly into your Action. If used on a desktop, the link will take users to the directory page for your Action, where they'll have the ability to choose the device they want to try your Action on. To configure Action Links in the console, visit the "Build" tab, choose "Actions", and select the Action for which you would like to create a link. That's it!
The best way to make sure that your Actions are working as intended is to test them using our updated web simulator. In the simulator, you can run through conversational user flows on phone, speaker, and even smart display device types. After you issue a request, you can see the visual response, the request and response JSON, and any errors. For further help with debugging, you can also view logs for your Actions.
Another great opportunity to test your Actions is by deploying to limited audiences in alpha and beta environments. By deploying to the alpha environment, your Actions do not need to go through the review process, meaning you can quickly test with your users. After deploying to the beta environment, you can launch your Actions to production whenever you like without additional review. To use alpha and beta environments, go to the "Deploy" tab and click "Release" in the Actions console.
After you deploy your Actions, it's equally important to measure their performance. By visiting the "Measure" tab and clicking "Analytics" in the Actions console, you can view rich analytics on usage, health, and discovery. You can easily see how many people are using and returning to your Actions, how many errors users are encountering, the phrases users are saying to discover your Actions, and much, much more. These insights can help you improve your Actions.
If you're new to the Actions console and looking for a quick way to get started, watch this video for an overview of the development process.
We're so excited to see how you will use the new Actions console to create even more Actions for more use cases, with additional tools to improve and iterate. Happy building!
Posted by Saba Zaidi, Senior Interaction Designer, Google Assistant
Earlier this year we announced Smart Displays, a new category of devices with the Google Assistant built in, that augment voice experiences with immersive visuals. These new, highly visual devices can make it easier to convey complex information, suggest Actions, support transactions, and express your brand. Starting today, Smart Displays are available for purchase in major US retailers, both in-store and online.
Interacting through voice is fast and easy, because speaking comes naturally to people, and language doesn't constrain them to the predefined paths of traditional visual interfaces. However, in audio-only interfaces, it can be difficult to communicate detailed information like lists or tables, and nearly impossible to represent rich content like images, charts, or a visual brand identity. Smart Displays allow you to create Actions for the Assistant that can respond to natural conversation, and also display information and represent your brand in an immersive, visual way.
Today we're announcing consumer availability of rich responses optimized for Smart Displays. With rich responses, you can use basic cards, lists, tables, carousels, and suggestion chips, giving you an array of visual interactions for your Action, with more visual components coming soon. You can also create custom themes to more deeply customize your Action's look and feel.
If you've already built a voice-centric Action for the Google Assistant, not to worry, it'll work automatically on Smart Displays. But we highly recommend adding rich responses and custom themes to make your Action even more visually engaging and useful to your users on Smart Displays. Here are a few tips to get you started:
Smart Displays offer several visual formats for displaying information and facilitating user input. A carousel of images, a list or a table can help users scan information efficiently and then interact with a quick tap or swipe.
For example, consider a long, spoken prompt like: "Welcome to National Anthems! You can play the national anthems from 20 different countries, including the United States, Canada and the United Kingdom. Which would you like to hear?"
Instead of merely showing the transcript of that whole spoken prompt on the screen, a carousel of country flags makes it easy for users to scroll and tap the anthem they want to hear.
Suggestion chips are a great way to surface recommendations, aid feature discovery and keep the conversation moving on Smart Displays.
In this example, suggestion chips can help users find the "surprise me" feature, find the most popular anthems, or filter anthems by region.
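In a Dialogflow webhook, suggestion chips like these ride along in the Actions on Google payload of the response. Here's a minimal sketch of that response fragment as a Python dict; the prompt text and chip titles are just the examples above, and the surrounding fulfillment plumbing is omitted.

def anthem_prompt():
    # Webhook response fragment: a spoken prompt plus suggestion chips.
    return {
        'payload': {
            'google': {
                'expectUserResponse': True,
                'richResponse': {
                    'items': [{'simpleResponse': {
                        'textToSpeech': 'Which anthem would you like to hear?'}}],
                    'suggestions': [{'title': 'Surprise me'},
                                    {'title': 'Most popular'},
                                    {'title': 'By region'}],
                },
            }
        }
    }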
You can take advantage of new custom themes to differentiate your experience and represent your brand's persona, choosing a custom voice, background image or color, font style, or the shape of your cards to match your branding.
For example, an Action like California Surf Report could be themed in a more immersive and customized way.
We offer more tips on designing and building for Smart Displays and other visual devices on our conversation design site and in our talk from I/O about how to design Actions across devices.
Then head to our documentation to learn how to customize the visual appearance of your Actions with rich responses. You can also test and tinker with customizations for Smart Displays in the Actions Console simulator.
Don't forget that once you publish your first Action you can join our community program* and receive your exclusive Google Assistant t-shirt and up to $200 of monthly Google Cloud credit.
We can't wait to see—quite literally—what you build next! Thanks for being a part of our community, and as always, if you have ideas or requests that you'd like to share with our team, don't hesitate to join the conversation.
*Some countries are not eligible to participate in the developer community program; please review the terms and conditions.
Posted by Billy Rutledge, Director of AIY Projects
Over the past year and a half, we've seen more than 200K people build, modify, and create with our Voice Kit and Vision Kit products. Today at Cloud Next we announced two new devices to help professional engineers build new products with on-device machine learning (ML) at their core: the AIY Edge TPU Dev Board and the AIY Edge TPU Accelerator. Both are powered by Google's Edge TPU and represent our first steps towards expanding AIY into a platform for experimentation with on-device ML.
The Edge TPU is Google's purpose-built ASIC chip designed to run TensorFlow Lite ML models on your device. We've learned that performance-per-watt and performance-per-dollar are critical benchmarks when processing neural networks within a small footprint. The Edge TPU delivers both in a package that's smaller than the head of a penny. It can accelerate ML inferencing on device, or can pair with Google Cloud to create a full cloud-to-edge ML stack. In either configuration, by processing data directly on-device, a local ML accelerator increases privacy, removes the need for persistent connections, reduces latency, and allows for high performance using less power.
The AIY Edge TPU Dev Board is an all-in-one development board that allows you to prototype embedded systems that demand fast ML inferencing. The baseboard provides all the peripheral connections you need to effectively prototype your device — including a 40-pin GPIO header to integrate with various electrical components. The board also features a removable system-on-module (SOM) daughter board that can be directly integrated into your own hardware once you're ready to scale.
The AIY Edge TPU Accelerator is a neural network coprocessor for your existing system. This small USB-C stick can connect to any Linux-based system to perform accelerated ML inferencing. The casing includes mounting holes for attachment to host boards such as a Raspberry Pi Zero or your custom device.
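To give a feel for what on-device inferencing looks like in code, here's a generic TensorFlow Lite sketch in Python. Treat it as illustrative only: the model path is a placeholder, and on the Edge TPU devices you'd load a compiled model through the accelerated runtime that ships with the board rather than stock TensorFlow.

import numpy as np
import tensorflow as tf

# Load a TensorFlow Lite model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one input of the shape and dtype the model expects (e.g., an image).
frame = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], frame)
interpreter.invoke()

# Read back the scores and report the top class.
scores = interpreter.get_tensor(output_details[0]['index'])
print('Top class index:', scores.argmax())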
On-device ML is still in its early days, and we're excited to see how these two products can be applied to solve real-world problems — such as increasing manufacturing equipment reliability, detecting quality control issues in products, tracking retail foot-traffic, building adaptive automotive sensing systems, and more applications that haven't been imagined yet.
Both devices will be available online this fall in the US with other countries to follow shortly.
For more product information visit g.co/aiy and sign up to be notified as products become available.
Posted by Mary Chen, Product Marketing Manager, and Ralfi Nahmias, Product Manager, Dialogflow
Today at Google Cloud Next '18, Dialogflow is introducing several new beta features to expand conversational capabilities for customer support and contact centers. Let's take a look at how three of these features can be used with the Google Assistant to improve the customer care experience for your Actions.
Building conversational Actions for content-heavy use cases, such as FAQ or knowledge base answers, is difficult. Such content is often dense and unstructured, making accurate intent modeling time-consuming and prone to error. Dialogflow's Knowledge Connectors feature simplifies the development process by understanding and automatically curating questions and responses from the content you provide. It can add thousands of extracted responses directly to your conversational Action built with Dialogflow, giving you more time for the fun parts – building rich and engaging user experiences.
Try out Knowledge Connectors in this bike shop sample
When users interact with the Google Assistant through text, it's common and natural to make spelling and grammar mistakes. When these mistakes occur, Actions may not understand the user's intent, resulting in a poor follow-up experience. With Dialogflow's Automatic Spelling Correction, Actions built with Dialogflow can automatically correct spelling mistakes, which significantly improves intent and entity matching. Automatic Spelling Correction uses technology similar to what's used in Google Search and other Google products.
Enable Automatic Spelling Correction to improve intent and entity matching
Your Action can now be used as a virtual phone agent with Dialogflow's new Phone Gateway integration. Assign a working phone number to your Action built with Dialogflow, and it can start taking calls immediately. Phone Gateway allows you to easily implement virtual agents without needing to stitch together multiple services required for building phone applications.
Set up Phone Gateway in 3 easy steps
Dialogflow's Knowledge Connectors, Automatic Spelling Correction, and Phone Gateway are free for Standard Edition agents up to certain limits; for enterprise needs, see here for more options.
We look forward to the Actions you'll build with these new Dialogflow features. Give the features a try with the Cloud Next FAQ Action we made.
And if you're new to developing for the Google Assistant, join our Cloud Next talk this Thursday at 9am – see you on the livestream or in person!
Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud
Google Cloud Next '18 is only a few days away, and this year, there are over 500 sessions covering all aspects of cloud computing, from G Suite to the Google Cloud Platform. This is your chance to learn first-hand how to build custom solutions in G Suite alongside other developers from Independent Software Vendors (ISVs), systems integrators (SIs), and industry enterprises.
G Suite's intelligent productivity apps are secure, smart, and simple to use, so why not integrate your apps with them? If you're planning to attend the event and are wondering which sessions to check out, the session schedule includes a number of G Suite talks worth considering.
I look forward to meeting you in person at Next '18. In the meantime, check out the entire session schedule to find out everything it has to offer. Don't forget to swing by our "Meet the Experts" office hours (Tue-Thu), G Suite "Collaboration & Productivity" showcase demos (Tue-Thu), the G Suite Birds-of-a-Feather meetup (Wed), and the Google Apps Script & G Suite Add-ons meetup (just after the BoF on Wed). I'm excited at how we can use "all the tech" to change the world. See you soon!
Google Developers is proud to announce DevFest 2018, the largest annual community event series for the Google Developer Groups (GDG) program. Hundreds of GDG chapters around the world will host their biggest and most exciting developer event of the year. These are often all-day or multi-day events with many speakers and workshops, highlighting a wide range of Google developer products. DevFest season runs from August to November 2018.
Our GDG organizers and communities are getting ready for the season, and are excited to host an event near you!
Whether you are an established developer, new to tech, or just curious about the community, come and check out #DevFest18. Everyone is invited!
For more information on DevFest 2018 and to find an event near you, visit the site.
Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite
While most chatbots respond to user requests in a synchronous way, there are scenarios when bots don't perform actions based on an explicit user request, such as for alerts or notifications. In today's DevByte video, I'm going to show you how to send messages asynchronously to rooms or direct messages (DMs) in Hangouts Chat, the team collaboration and communication tool in G Suite.
What comes to mind when you think of a bot in a chat room? Perhaps a user wants the last quarter's European sales numbers, or maybe, they want to look up local weather or the next movie showtime. Assuming there's a bot for whatever the request is, a user will either send a direct message (DM) to that bot or @mention the bot from within a chat room. The bot then fields the request (sent to it by the Hangouts Chat service), performs any necessary magic, and responds back to the user in that "space," the generic nomenclature for a room or DM.
Our previous DevByte video for the Hangouts Chat bot framework shows developers what bots and the framework are all about, as well as how to build one of these bots in both Python and JavaScript. However, those bots respond synchronously to a user request. That doesn't suffice when users want to be notified when a long-running background job has completed, when a late bus or train will be arriving soon, or when one of their servers has just gone down. Such alerts can come from a bot, but also from a monitoring application. In the latest episode of the G Suite Dev Show, learn how to integrate this functionality in either type of application.
From the video, you can see that alerts and notifications are "out-of-band" messages, meaning they can come in at any time. The Hangouts Chat bot framework provides several ways to send asynchronous messages to a room or DM, generically referred to as a "space." The first is the HTTP-based REST API. The other way is using what are known as "incoming webhooks."
The REST API is used by bots to send messages into a space. Since a bot will never be a human user, a Google service account is required. Once you create a service account for your Hangouts Chat bot in the developers console, you can download the credentials it needs to communicate with the API. Below is a short Python snippet that uses the API to send a message asynchronously to a space.
from apiclient import discovery
from httplib2 import Http
from oauth2client.service_account import ServiceAccountCredentials

# Authorize with the service account's key file and the Chat bot scope.
SCOPES = 'https://www.googleapis.com/auth/chat.bot'
creds = ServiceAccountCredentials.from_json_keyfile_name('svc_acct.json', SCOPES)

# Build the Hangouts Chat API client.
CHAT = discovery.build('chat', 'v1', http=creds.authorize(Http()))

# Post a message asynchronously into the room or DM.
room = 'spaces/<ROOM-or-DM>'
message = {'text': 'Hello world!'}
CHAT.spaces().messages().create(parent=room, body=message).execute()
The alternative to using the API with service accounts is the concept of incoming webhooks. Webhooks are a quick and easy way to send messages into any room or DM without configuring a full bot, which makes them ideal for monitoring apps. Webhooks also let you integrate custom workflows, such as posting when a new customer is added to the corporate CRM (customer relationship management) system, as well as the other scenarios mentioned above. Below is a Python snippet that uses an incoming webhook to post a message into a space asynchronously.
import json
import requests

# The full webhook URL (elided here) comes from the room's webhook configuration.
URL = 'https://chat.googleapis.com/...&thread_key=T12345'
message = {'text': 'Hello world!'}
requests.post(URL, data=json.dumps(message))
Since incoming webhooks are merely endpoints you HTTP POST to, you can even use curl to send a message to a Hangouts Chat space from the command-line:
curl \
  -X POST \
  -H 'Content-Type: application/json' \
  'https://chat.googleapis.com/...&thread_key=T12345' \
  -d '{"text": "Hello!"}'
To get started, take a look at the Hangouts Chat developer documentation, especially the specific pages linked to above. We hope this video helps you take your bot development skills to the next level by showing you how to send messages to the Hangouts Chat service asynchronously.