<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Google Developers - Medium]]></title>
        <description><![CDATA[Engineering and technology articles for developers, written and curated by Googlers. The views expressed are those of the authors and don&#39;t necessarily reflect those of Google. - Medium]]></description>
        <link>https://medium.com/google-developers?source=rss----2e5ce7f173a5---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>Google Developers - Medium</title>
            <link>https://medium.com/google-developers?source=rss----2e5ce7f173a5---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Wed, 20 Nov 2019 11:19:27 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/google-developers" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Building a Simple Web Upload Interface with Google Cloud Run and Cloud Storage]]></title>
            <link>https://medium.com/google-developers/building-a-simple-web-upload-interface-with-google-cloud-run-and-cloud-storage-eba0a97edc7b?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/eba0a97edc7b</guid>
            <category><![CDATA[startup]]></category>
            <category><![CDATA[containers]]></category>
            <category><![CDATA[google-cloud-platform]]></category>
            <category><![CDATA[python]]></category>
            <dc:creator><![CDATA[Google Developers]]></dc:creator>
            <pubDate>Tue, 19 Nov 2019 16:54:31 GMT</pubDate>
            <atom:updated>2019-11-19T16:54:31.159Z</atom:updated>
            <content:encoded><![CDATA[<p>Posted by <a href="https://medium.com/u/e0e2964ac896"><em>Matt Cowger</em></a><em>, Global Head of Startup Architects</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/358/1*hDnIbVFCVsPm5xFLfeldjQ.png" /></figure><h4>Building a GCS Uploader</h4><p>In my time since joining Google, more than one startup has asked about the possibility of receiving files into <a href="https://cloud.google.com/storage/">Google Cloud Storage</a> (GCS) so that they can be the start of a <a href="https://cloud.google.com/pubsub/docs/overview">Pub/Sub</a>-based pipeline for all sorts of use cases — image recognition, data ingestion, etc.</p><p>To me, this seems like a natural fit for the product. However, there’s no available ‘shim’ layer for doing this: no way to treat GCS like an FTP server or a direct (simple) web upload target. There are workarounds with <a href="https://github.com/GoogleCloudPlatform/gcsfuse">gcsfuse</a> or <a href="https://cloud.google.com/storage/docs/gsutil">gsutil</a>, but those require end users to install and use Google-specific command line tools, and even then they can’t be used without direct Google credentials in the project.</p><p>However, even a naive implementation of such a system would be suboptimal (I suppose that’s why it’s called naive). By placing a single FTP or web upload shim (likely hosted out of a <a href="https://cloud.google.com/compute/docs/regions-zones/">single GCP zone</a>), we negate much of the power and performance of <a href="https://cloud.google.com/products/networking/">Google’s global network</a> and GCS’s distributed nature, and potentially limit performance to the network interface our shim runs on. Optimally, we’d want something with the following characteristics:</p><ol><li>Uses exclusively standard web technologies and runs on modern browsers. This means supporting HTTP/1.1, HTTP/2, TLS, etc.</li><li>Avoids custom plugins or tools</li><li>Can be used in an anonymous or semi-anonymous way — at minimum we need to avoid the use of project credentials on the client side.</li><li>Maximizes the value of Google’s network along with the user’s network, so that we get the best upload performance possible.</li><li>Costs as little as possible to maintain — ideally we don’t want a daemon running somewhere that costs much more than GCS object storage itself.</li></ol><p>Using these ideas as my guide, I developed a prototype to do exactly that, using only two Google products (<a href="https://cloud.google.com/run/">Cloud Run</a> and Google Cloud Storage).</p><p>Ultimately, the solution was to use the fact that GCS <a href="https://cloud.google.com/storage/docs/access-control/signed-urls">supports signed URLs</a> — URLs that have all relevant (time-limited) authentication information built into them. We can use those as a way to avoid the need to deliver standard GCP credentials.</p>
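<p>For illustration, a V4 signed URL embeds the credential scope, expiry, and signature as query parameters, so it looks roughly like this (the bucket name and all values here are placeholders, abbreviated for readability):</p><pre>https://storage.googleapis.com/example-bucket/photo.png<br>    ?X-Goog-Algorithm=GOOG4-RSA-SHA256<br>    &amp;X-Goog-Credential=uploader%40my-project.iam.gserviceaccount.com%2F...<br>    &amp;X-Goog-Date=20191119T000000Z<br>    &amp;X-Goog-Expires=3600<br>    &amp;X-Goog-SignedHeaders=host<br>    &amp;X-Goog-Signature=75d6e7dd...</pre>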
<p>By using signed URLs we also satisfy requirements #1, #2 and #4 — all of the bulk upload traffic from the user to GCS is carried directly to the nearest <a href="https://cloud.google.com/vpc/docs/edge-locations">Google Cloud edge</a> point over HTTPS, and not through a single system somewhere.</p><p>The two biggest challenges are to:</p><ol><li>Generate those signed URLs</li><li>Design a frontend UI</li></ol><h4>Generate Signed URLs</h4><p>Generating signed URLs within GCP is a fairly simple process, supported by a wide range of languages and their associated GCP SDKs. Because Python is superior to all other languages in every way, I chose to use it as my base, but it could be done in nearly any language you prefer.</p><p>After a few setup procedures detailed in the <a href="https://github.com/mcowger/gcs-file-uploader">GitHub repo</a> for this project, I centered on a key function to generate the URLs:</p><pre>@app.route(&#39;/getSignedURL&#39;)<br>def getSignedURL():<br>    filename = request.args.get(&#39;filename&#39;)<br>    action = request.args.get(&#39;action&#39;)<br>    blob = bucket.blob(filename)<br>    url = blob.generate_signed_url(<br>        expiration=datetime.timedelta(minutes=60),<br>        method=action, version=&quot;v4&quot;)<br>    return url</pre><p>It’s very simple, and very short — consider this just a backend API. After parsing some incoming parameters (note: there are <em>gaping security holes</em> here — this function uses a client-generated value with absolutely no sanity checking), we ask the API to sign a URL for this path in the bucket that’s good for 60 minutes.</p><p>It’s worth remembering that the Python service itself is just an API — it does not handle any of the client-side code. However, for ease of testing and deployment, I also have the static part of the site (HTML, CSS and JavaScript) served from the same container; that could be replaced with GCS website serving as a later optimization.</p><p>Deciding where to run this small Python script is its own interesting thought process: do we run it on a Compute Engine instance, on App Engine, as a Cloud Function, in Cloud Run, or in Kubernetes (GKE)? All of these would work, but for my case I wanted as little management as possible, making Cloud Functions and Cloud Run the top contenders. Both support scale-to-zero and fine-grained billing, meeting requirement #5. For me, I’m most comfortable testing, deploying and managing containers, so I went with Cloud Run.</p><p>The last important component is the client-side work, where the user selects the file they wish to upload with a standard form, and then the <a href="https://github.com/mcowger/gcs-file-uploader/blob/master/templates/signedurl.html#L71">request for the signed URL is made</a>:</p><pre>async function generateSignedURL() {<br>      file = getFilename();<br>      action = &quot;PUT&quot;;<br>      const response = await fetch(&#39;/getSignedURL?filename=&#39; + file + &quot;&amp;action=&quot; + action)<br>      if (!response.ok) {<br>        throw new Error(&#39;Network response for fetch was not ok.&#39;);<br>      }<br>      c = await response.text();<br>      c = c.replace(/\&quot;/g, &quot;&quot;)<br>      console.log(&quot;Got signedURL: &quot; + c)<br>      console.log(&quot;Trying to upload &quot; + file)<br>      upload();<br>      console.log(&quot;Complete&quot;)<br>      return false;<br>    }</pre><p>And then, lastly, the form itself is submitted when the user clicks the button. This is special because the target of that form is the GCS signed URL <strong>directly</strong>, rather than the Python service, meaning we are limited only by client bandwidth for the upload and we maximize the performance benefit of Google’s network.</p>
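<p>The <em>upload()</em> helper called above isn’t shown in the post; a minimal sketch of what it might look like, assuming the signed URL is kept in the global <em>c</em> populated by <em>generateSignedURL()</em> (see the repo for the real implementation):</p><pre>async function upload() {<br>  // Read the selected file from the form&#39;s file input<br>  const fileInput = document.querySelector(&#39;input[type=&quot;file&quot;]&#39;);<br>  const file = fileInput.files[0];<br>  // PUT the raw bytes straight to the time-limited signed GCS URL (global c)<br>  const response = await fetch(c, {<br>    method: &#39;PUT&#39;,<br>    body: file,<br>  });<br>  if (!response.ok) {<br>    throw new Error(&#39;Upload to signed URL failed: &#39; + response.status);<br>  }<br>}</pre>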
<p>Once the code has been pushed to Cloud Run (gcloud run deploy), it’s ready to go (note: I’m skipping the process of building a container — that process is left up to the reader, but the Dockerfile is in the repo)!</p><figure><img alt="Sample upload confirmation" src="https://cdn-images-1.medium.com/max/455/0*FcGqzl7_0aKiFNeA" /></figure><p>You can find the full repo on my GitHub: <a href="https://github.com/mcowger/gcs-file-uploader">https://github.com/mcowger/gcs-file-uploader</a></p><hr><p><a href="https://medium.com/google-developers/building-a-simple-web-upload-interface-with-google-cloud-run-and-cloud-storage-eba0a97edc7b">Building a Simple Web Upload Interface with Google Cloud Run and Cloud Storage</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Use video loops with Interactive Canvas]]></title>
            <link>https://medium.com/google-developers/use-video-loops-with-interactive-canvas-dc7503e95c6a?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/dc7503e95c6a</guid>
            <category><![CDATA[actions-on-google]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[google-assistant]]></category>
            <dc:creator><![CDATA[Leon Nicholls]]></dc:creator>
            <pubDate>Tue, 05 Nov 2019 18:01:02 GMT</pubDate>
            <atom:updated>2019-11-05T18:01:01.899Z</atom:updated>
            <content:encoded><![CDATA[<p>Video can be a very effective way to use high-production visuals in your <a href="https://developers.google.com/assistant/interactivecanvas">Interactive Canvas</a> game for the Google Assistant. In a <a href="https://medium.com/google-developers/optimize-your-web-apps-for-interactive-canvas-18f8645f8382">previous post</a>, we discussed using video loops in an Interactive Canvas web app.</p><p>This post discusses the necessary steps to prepare video files and to write the JavaScript logic to play seamless video loops in an Interactive Canvas web app.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/0*0mM8X3-GAXja5-V8" /><figcaption>Seamless video loop for Interactive Canvas</figcaption></figure><h3>Media Source Extensions</h3><p>Creating seamless video loops requires using <a href="https://developer.mozilla.org/en-US/docs/Web/API/Media_Source_Extensions_API">Media Source Extensions</a> (MSE), a browser feature that extends the HTML media element to allow JavaScript to generate media streams for playback.</p><p>You have to write JavaScript code to download and buffer the video data, which is then handed directly to the HTML video tag buffer using MSE. Since MSE is a low-level API, there are a lot of details to get right, but this post will walk through all the necessary steps.</p><h3>Prepare the video</h3><p>First, you have to convert the video to the fragmented MP4 file format for streaming. A normal MP4 file consists of a header and the media data, with the header located at the end of the file. For streaming, the header has to be moved to the beginning of the file.</p><p>You’ll need the following tools to prepare the video:</p><ul><li><a href="http://www.ffmpeg.org/download.html">FFmpeg</a></li><li><a href="http://gpac.wp.mines-telecom.fr/downloads/gpac-nightly-builds/">MP4Box</a></li></ul><p>Use FFmpeg to convert the file to use the correct codec:</p><pre>ffmpeg -i video.mp4 -an -codec:v libx264 -profile:v baseline <br>   -level 3 -b:v 2000k videocodec.mp4</pre><p>Run the following command to put the header at the front of the file and to ensure that the fragments start with Random Access Points:</p><pre>MP4Box -dash 1000 -rap -frag-rap videocodec.mp4</pre><p>A new MP4 file is generated with a “<em>_dashinit</em>” suffix in the filename. Upload this file to your web server.</p><h3>Play the video</h3><p>Now that the video file is in the correct format, you’ll use MSE to load and play the file with the HTML media element.</p><p>First, define an HTML video element in your web app:</p><pre>&lt;video id=&#39;vid&#39;&gt;&lt;/video&gt;</pre><p>Get a reference to the video element in JavaScript:</p><pre>let video = document.getElementById(&#39;vid&#39;);</pre><p>Create a <a href="https://developer.mozilla.org/en-US/docs/Web/API/MediaSource"><em>MediaSource</em></a> object and create a virtual URL using <em>URL.createObjectURL</em> with the <em>MediaSource</em> object as the source. Then, assign the virtual URL to the media element’s “<em>src</em>” property:</p><pre>let mediaSource = new MediaSource();<br>video.src = window.URL.createObjectURL(mediaSource);</pre><p>Wait for the <em>MediaSource</em> “<em>sourceopen</em>” event to tell you that the media source object is ready for a buffer to be added.</p>
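<p>Before creating the buffer, it’s worth confirming that the browser supports MSE and the exact codec string used below; for example:</p><pre>if (!window.MediaSource ||<br>    !MediaSource.isTypeSupported(&#39;video/mp4; codecs=&quot;avc1.42c01e&quot;&#39;)) {<br>  console.error(&#39;MSE or this codec is not supported in this browser&#39;);<br>}</pre>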
<p>Create a <a href="https://developer.mozilla.org/en-US/docs/Web/API/SourceBuffer"><em>SourceBuffer</em></a> using the <em>MediaSource</em> <em>addSourceBuffer()</em> method with the mime type of the video, and then start downloading the file:</p><pre>let sourceBuffer;<br>mediaSource.addEventListener(&#39;sourceopen&#39;, function(){ <br>   sourceBuffer = mediaSource.addSourceBuffer(<br>       &#39;video/mp4; codecs=&quot;avc1.42c01e&quot;&#39;);<br>   fileDownload(&#39;videocodec_dashinit.mp4&#39;);              <br>});</pre><p>Use <em>XMLHttpRequest</em> to download the file as an <em>ArrayBuffer</em>:</p><pre>function fileDownload(url) {<br>  var xhr = new XMLHttpRequest();<br>  xhr.open(&#39;GET&#39;, url, true);<br>  xhr.responseType = &#39;arraybuffer&#39;;<br>  xhr.send();<br>  xhr.onload = function(e) {<br>    if (xhr.status != 200) {<br>      onLoad();<br>      return;<br>    }<br>    onLoad(xhr.response);<br>  };<br>  xhr.onerror = function(e) {<br>    video.src = null;<br>  };<br>};</pre><p>Append the file data to the <em>SourceBuffer</em> with <em>appendBuffer()</em>:</p><pre>let allSegments;<br>function onLoad(arrayBuffer) {<br>  if (!arrayBuffer) {<br>    video.src = null;<br>    return;<br>  }<br>  allSegments = new Uint8Array(arrayBuffer);<br>  sourceBuffer.appendBuffer(allSegments);<br>  processNextSegment();<br>}</pre><p>Call the <em>play()</em> method on the video element, and append video segments to the source buffer when there is less than 10 seconds left in the playback pipeline:</p><pre>function processNextSegment() {<br>  // Wait for the source buffer to be updated<br>  if (!sourceBuffer.updating &amp;&amp;   <br>       sourceBuffer.buffered.length &gt; 0) {<br>    // Only push a new fragment if we are not updating and we have<br>    // less than 10 seconds in the pipeline<br>    if (sourceBuffer.buffered.end( <br>          sourceBuffer.buffered.length - 1) -  <br>          video.currentTime &lt; 10) {<br>      // Append the video segments and adjust the timestamp offset <br>      // forward<br>      sourceBuffer.timestampOffset =  <br>          sourceBuffer.buffered.end(<br>              sourceBuffer.buffered.length - 1);<br>      sourceBuffer.appendBuffer(allSegments);<br>    }<br>    // Start playing the video<br>    if (video.paused) {<br>      <strong>video.play();</strong><br>    }<br>  }<br>  setTimeout(processNextSegment, 1000);<br>};</pre><p>The video will keep looping forever, and there won’t be any delays between each loop.</p><h3>Next steps</h3><p>Even if you don’t understand all of the technical details of using MSE, just copy and paste the code above into an HTML file and try it out in a browser first. Once that is working as expected, add the necessary JavaScript for an <a href="https://developers.google.com/assistant/interactivecanvas/build/web-app">Interactive Canvas web app</a> and deploy it as an Action.</p><p>It’s also possible to use this technique to create seamless transitions between different videos, but that requires managing a buffer per video.</p><p>That’s all you need to display high-production visuals in your Interactive Canvas game. Have fun!</p><p><em>To share your thoughts or questions, join us on Reddit at </em><a href="https://www.reddit.com/r/GoogleAssistantDev/"><em>/r/GoogleAssistantDev</em></a><em>. 
Follow </em><a href="https://twitter.com/ActionsOnGoogle"><em>@ActionsOnGoogle</em></a><em> on Twitter for more of our team’s updates, and tweet using #AoGDevs to share what you’re working on.</em></p><hr><p><a href="https://medium.com/google-developers/use-video-loops-with-interactive-canvas-dc7503e95c6a">Use video loops with Interactive Canvas</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Actions on Google Console Analytics Updates]]></title>
            <link>https://medium.com/google-developers/actions-on-google-console-analytics-updates-1132a593bff9?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/1132a593bff9</guid>
            <category><![CDATA[actions-on-google]]></category>
            <category><![CDATA[analytics]]></category>
            <category><![CDATA[google-home]]></category>
            <category><![CDATA[google-assistant]]></category>
            <dc:creator><![CDATA[Mandy Chan]]></dc:creator>
            <pubDate>Mon, 28 Oct 2019 15:44:39 GMT</pubDate>
            <atom:updated>2019-10-28T18:50:07.569Z</atom:updated>
            <content:encoded><![CDATA[<h3>New Analytics updates in Actions on Google Console</h3><p>Have you built an Action for the Google Assistant and wondered how many people are using it? Or how many of your users are returning users? In this blog post, we will dive into three improvements that the Actions on Google Console team has made to give you more insight into how your Action is being used.</p><h3>1. Multiple improvements for readability</h3><p>We’ve updated three areas of the Actions Console for readability: Active Users Chart, Date Range Selection, and Filter Options. With these new updates, you can now better customize the data to analyze the usage of your Actions.</p><h3>Active Users Chart</h3><p>The labels at the top of the Active Users chart now read Daily, Weekly and Monthly, instead of the previous 1 Day, 7 Days and 28 Days labels. We’ve also made the individual date labels at the bottom of the chart clearer. You’ll also notice a quick insight at the bottom of the chart that shows the number of unique users during this time period.</p><p><strong>Before</strong></p><figure><img alt="Active users chart" src="https://cdn-images-1.medium.com/max/901/0*LfHnJc69-CKHsmy6" /></figure><p><strong>After</strong></p><figure><img alt="Active users chart" src="https://cdn-images-1.medium.com/max/1024/0*2pu7e9sRaLsoIzAC" /></figure><h3>Date Range Selection</h3><p>Previously, the date range selectors applied globally to all the charts. These selectors are now local to each chart, giving you more control over how you view your data.</p><p>The date selector provides the following ranges:</p><ul><li>Daily (last 7 days, last 30 days, last 90 days)</li><li>Weekly (last 4 weeks, last 8 weeks, last 12 weeks, last 24 weeks)</li><li>Monthly (last 3 months, last 6 months, last 12 months)</li></ul><figure><img alt="Date selector in the Actions console analytics" src="https://cdn-images-1.medium.com/max/600/0*eS2WzpMhZ9ESbcEQ" /></figure><h3>Filter Options</h3><p>Previously, when you added a filter, it was applied to all the charts on the page. Now, the filters apply only to the chart you’re viewing. We’ve also enhanced the filtering options available for the ‘Surface’ filter, such as mobile devices, smart speakers, and smart displays.</p><p><strong>Before</strong></p><figure><img alt="Filter options on analytics" src="https://cdn-images-1.medium.com/max/600/0*AuJw-1sV9KDLf--6" /></figure><p><strong>After</strong></p><figure><img alt="Filtering options in the Actions console analytics" src="https://cdn-images-1.medium.com/max/600/0*1dF7zhdUfg6Qy5_U" /></figure><p>The filter feature also lets you show data breakdowns over different dimensions. By default, the chart shows a single consolidated line, a result of all the filters applied. You can now select the ‘Show breakdown by’ option to see how the components of that data contribute to the totals based on the dimension you selected.</p><h3>2. Introducing Retention metrics (New!)</h3><p>A brand new addition to analytics is a retention metrics chart that helps you understand how well your Action is retaining users. This chart shows you how many users you had in a week and how many returned each week for up to 5 weeks. 
The higher the percentage week after week, the better your retention.</p><p>When you hover over each cell in the chart, you can see the exact number of users who returned that week from the previous week.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*ew_pR4CLpV-7s3w9hP9Vmg.gif" /></figure><h3>3. Improvements to Conversation Metrics</h3><p>Finally, we’ve consolidated the conversation metrics into a single chart with separate tabs (‘Conversations’, ‘Messages’, ‘Avg Length’ and ‘Abort rate’) for easier comparison and visibility of trends over time. We’ve also updated the chart labels and tooltips for better interpretation.</p><p><strong>Before</strong></p><figure><img alt="Conversation Metrics" src="https://cdn-images-1.medium.com/max/979/0*56Y0Ln23g1EmE1zC" /></figure><p><strong>After</strong></p><figure><img alt="Improvements to Conversation Metrics" src="https://cdn-images-1.medium.com/max/1024/0*-JvUO-JuuMewWG2p" /></figure><h3>Next steps</h3><p>To learn more about what each metric means, you can check out our <a href="https://developers.google.com/actions/console/analytics">documentation</a>.</p><p><em>Try out these new improvements to see how your Actions are performing with your users. Let us know if you have any feedback, or suggestions for metrics you need to improve your Action. Thanks for reading! To share your thoughts or questions, join us on Reddit at </em><a href="https://www.reddit.com/r/GoogleAssistantDev/"><em>r/GoogleAssistantDev</em></a><em>.</em></p><p><em>Follow </em><a href="https://twitter.com/ActionsOnGoogle"><em>@ActionsOnGoogle</em></a><em> on Twitter for more of our team’s updates, and tweet using #AoGDevs to share what you’re working on. Can’t wait to see what you build!</em></p><hr><p><a href="https://medium.com/google-developers/actions-on-google-console-analytics-updates-1132a593bff9">Actions on Google Console Analytics Updates</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Smart Home Cloud Services with Google: Part 2]]></title>
            <link>https://medium.com/google-developers/smart-home-cloud-services-with-google-part-2-3901ab39c39c?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/3901ab39c39c</guid>
            <category><![CDATA[actions-on-google]]></category>
            <category><![CDATA[google-assistant]]></category>
            <category><![CDATA[firebase]]></category>
            <category><![CDATA[google-cloud-platform]]></category>
            <category><![CDATA[iot]]></category>
            <dc:creator><![CDATA[Dave Smith]]></dc:creator>
            <pubDate>Fri, 27 Sep 2019 15:31:01 GMT</pubDate>
            <atom:updated>2019-09-27T15:31:01.155Z</atom:updated>
            <content:encoded><![CDATA[<p>In the <a href="https://medium.com/google-developers/building-a-smart-home-cloud-service-with-google-1ee436ac5a03">previous post of this series</a>, we explored using Cloud IoT Core and Firebase to build a device cloud for smart home devices. We saw how Cloud IoT Core enables us to securely connect constrained devices to Google Cloud, while Firebase constructs a user framework around our device data. As a quick review, here is the cloud service architecture we discussed last time.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/960/0*XUxdYux1b1GRPxyd" /><figcaption><em>Figure 1: Cloud service architecture</em></figcaption></figure><p>Now, let’s look at extending this cloud service to integrate with the Google Assistant through <a href="https://developers.google.com/actions/smarthome/">smart home Actions</a>. This enables users to link their account through the Google Home app and control their devices through any Assistant-enabled surface.</p><blockquote>If you are unfamiliar with Actions on Google or smart home Actions for the Google Assistant, I recommend reading <strong>IoT &amp; Google Assistant</strong> <a href="https://medium.com/google-developers/iot-google-assistant-f0908f354681">part 1</a> and <a href="https://medium.com/google-developers/jdanielmyers-smart-home-eac8f87fd56e">part 2</a> by my colleague, Dan Myers, as a starting point.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/960/1*DF4m1mpZkf3VO0PXqD5z4w.png" /><figcaption><em>Figure 2: Example devices registered in Home App</em></figcaption></figure><p>For you as a developer, this brings the power of the Home Graph to your devices and gives them context within the user’s home. This context is what enables users to make natural requests, like “What is the temperature in the hallway?”, instead of referring to the device by name.</p><p>To build a smart home Action, create a new project in the <a href="https://console.actions.google.com/">Actions console</a>. We will add two new features to our device cloud service and configure them in our console project: account linking and intent fulfillment. Let’s start by taking a look at how to integrate with the account linking process.</p><blockquote>You can find the sample code described in this post on <a href="https://github.com/GoogleCloudPlatform/iot-smart-home-cloud">GitHub</a>.</blockquote><h3>Account linking</h3><p>Users authorize the Google Assistant to access their devices through account linking. This process enables the user to sign in to the account they use with the device cloud and connect the devices managed by that account to Google. The Actions on Google platform supports several different account linking flows, but only the <a href="https://developers.google.com/actions/identity/oauth2?oauth=code">OAuth 2.0 Authorization Code</a> flow is supported for smart home Actions.</p><p>To configure OAuth account linking, you need to supply two endpoints in the Actions console: one for <strong>authorization</strong> and the other for <strong>token</strong> exchange. The authorization endpoint is a web UI where the user can authenticate and agree to link their account with the Google Assistant.</p>
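<p>When the user starts account linking, Google opens this authorization endpoint with the standard OAuth parameters appended; the incoming request looks roughly like this (the endpoint host and values are placeholders):</p><pre>GET https://your-auth-endpoint.example.com/link<br>    ?client_id=GOOGLE_CLIENT_ID<br>    &amp;redirect_uri=https://oauth-redirect.googleusercontent.com/r/YOUR_PROJECT_ID<br>    &amp;state=STATE_STRING<br>    &amp;response_type=code</pre>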
<p>It must return an authorization code that uniquely identifies the user.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/960/1*eeQlrmSjUQ9M46cvYAc__Q.png" /><figcaption><em>Figure 3: Example account linking UI</em></figcaption></figure><p>Since we are using Firebase Authentication for the client apps, we can add a new Angular route to our web client for the user to sign in and return their <a href="https://firebase.google.com/docs/auth/admin/verify-id-tokens">Firebase ID token</a> as the authorization code once they authorize access:</p><pre>export class LinkAccountComponent implements OnInit {<br>  redirectUri: string;<br>  state: string;<br>  idToken: string;<br>  constructor(private authService: AngularFireAuth,<br>    private route: ActivatedRoute) { }</pre><pre>ngOnInit() {<br><strong>    this.authService.idToken.subscribe((token) =&gt; {<br>      this.idToken = token;<br></strong>    <strong>}</strong>);</pre><pre>    this.route.queryParamMap.subscribe((params) =&gt; {<br>      this.redirectUri = params.get(&#39;redirect_uri&#39;);<br>      this.state = params.get(&#39;state&#39;);<br>    });<br>  }</pre><pre>linkAccount() {<br><strong>    const next = new URL(this.redirectUri);<br>    next.searchParams.append(&#39;code&#39;, this.idToken);<br>    next.searchParams.append(&#39;state&#39;, this.state);<br>    window.location.href = next.toString();</strong><br>  }<br>}</pre><blockquote>Snippet from <a href="https://github.com/GoogleCloudPlatform/iot-smart-home-cloud/blob/master/web/src/app/link.component.ts">web/src/app/link.component.ts</a></blockquote><p>Once the authorization flow is complete, the Google Assistant calls your token endpoint to exchange the authorization code for a persistent refresh token. This token does not expire and remains valid unless the user chooses to revoke device access or unlink their account.</p><p>Using the Firebase Admin SDK, we can validate and decode the ID token to obtain the UID of the user. If the token is valid, we can generate a refresh token and associate it with the user’s UID in Firestore. This enables us to look up the token again later for validating future requests.</p><pre>async function handleAuthorizationCode(request, response) {<br><strong>  // Auth code is a Firebase ID token<br>  const decodedToken = await auth.verifyIdToken(request.body.code);</strong><br>  // Verify UID exists in our database<br>  const result = await auth.getUser(decodedToken.uid);</pre><pre>  // Encode the user info as a JWT<br>  const refresh = jwt.sign({<br>    sub: result.uid,<br>    aud: client_id<br>  }, secret);<br><br><strong>  // Register this refresh token for the given user<br>  const userRef = firestore.doc(`users/${result.uid}`);<br>  await userRef.set({ &#39;refresh_token&#39;: refresh });</strong></pre><pre>  ...<br>}</pre><blockquote>Snippet from <a href="https://github.com/GoogleCloudPlatform/iot-smart-home-cloud/blob/master/firebase/functions/smart-home/token.js">functions/smart-home/token.js</a></blockquote><p>Firebase Authentication is an identity provider, but not a complete OAuth solution. This means our device cloud service must augment Firebase by minting and verifying tokens used for access to user data. 
The example code uses the <a href="https://jwt.io/">JWT</a> standard to create a self-encoded token with the following JSON payload:</p><pre>{<br>  &quot;sub&quot;: uid,<br>  &quot;aud&quot;: client_id,<br>  &quot;iat&quot;: issued_at_time<br>}</pre><blockquote>We are using the JWT.io <a href="https://jwt.io/#libraries-io">library</a> for Node.js for all operations related to generating and validating tokens in this example.</blockquote><p>The Google Assistant uses this refresh token to request an access token that will authenticate requests for device data. Our example service validates the refresh token signature and checks to make sure it’s the refresh token we expect for that user.</p><pre>async function handleRefreshToken(request, response) {<br>  const refreshToken = request.body.refresh_token;<br>  // Verify UID exists in our database<br>  <strong>const decodedToken = jwt.verify(refreshToken, secret);</strong><br>  const result = await auth.getUser(decodedToken.sub);</pre><pre>  // Verify incoming token matches our stored refresh token<br>  const userRef = firestore.doc(`users/${result.uid}`);<br>  const user = await userRef.get();<br>  const validToken = user.data().refresh_token;<br>  if (validToken !== refreshToken) throw new Error(...);</pre><pre>  <strong>// Obtain a new access token<br>  const access = jwt.sign({<br>    sub: result.uid,<br>    aud: client_id<br>  }, secret, {<br>      expiresIn: &#39;1h&#39;<br>    });</strong></pre><pre>  ...<br>}</pre><blockquote>Snippet from <a href="https://github.com/GoogleCloudPlatform/iot-smart-home-cloud/blob/master/firebase/functions/smart-home/token.js">functions/smart-home/token.js</a></blockquote><p>OAuth access tokens should expire, which requires the Assistant service to periodically request a new one using the persistent refresh token. This enables the user to revoke their authorization if necessary. The example access tokens contain the same self-encoded payload as the refresh token, but they expire after one hour.</p><h3>Intent fulfillment</h3><p>With the authorization and token endpoints in place, we are ready to begin implementing the fulfillment logic for the user’s devices.</p>
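<p>Each fulfillment request arrives as a JSON envelope that names the intent being invoked; per the documented format, a SYNC request looks like this (the requestId value is illustrative):</p><pre>{<br>  &quot;requestId&quot;: &quot;6894439706274654512&quot;,<br>  &quot;inputs&quot;: [{<br>    &quot;intent&quot;: &quot;action.devices.SYNC&quot;<br>  }]<br>}</pre>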
<p>In this post, we will focus on implementing each intent in the context of our device cloud example, but you can find additional details on these intents and how they work together in the <a href="https://developers.google.com/actions/smarthome/concepts/intents">documentation</a>.</p><p>We can use the <a href="https://actions-on-google.github.io/actions-on-google-nodejs/">Actions on Google Client Library</a> for Node.js, which handles parsing the fulfillment requests and provides individual callbacks to handle each intent.</p><pre><strong>const { smarthome } = require(&#39;actions-on-google&#39;);<br>const fulfillment = smarthome();</strong></pre><pre>/** SYNC Intent Handler */<br>fulfillment.onSync(async (body, headers) =&gt; {<br>  ...<br>});</pre><pre>/** QUERY Intent Handler */<br>fulfillment.onQuery(async (body, headers) =&gt; {<br>  ...<br>});</pre><pre>/** EXECUTE Intent Handler */<br>fulfillment.onExecute(async (body, headers) =&gt; {<br>  ...<br>});</pre><pre>/** DISCONNECT Intent Handler */<br>fulfillment.onDisconnect(async (body, headers) =&gt; {<br>  ...<br>});</pre><blockquote>Snippet from <a href="https://github.com/GoogleCloudPlatform/iot-smart-home-cloud/blob/master/firebase/functions/smart-home/fulfillment.js">functions/smart-home/fulfillment.js</a></blockquote><p>At the beginning of each handler, we need to validate the access token provided with the request. Since the access tokens our application provides are formatted as a JWT that has the user’s UID encoded inside, we simply need to verify the JWT signature using our application’s secret to ensure the token came from us, and check that it has not expired. All of this is handled automatically by the verify() method of the JWT.io <a href="https://jwt.io/#libraries-io">client library</a> for Node.js.</p><pre><strong>const jwt = require(&#39;jsonwebtoken&#39;);</strong></pre><pre>/**<br> * Verify the request credentials provided by the caller.<br> * If successful, return UID encoded in the token.<br> */<br>function validateCredentials(headers, jwt_secret) {<br>  if (!headers.authorization ||<br>      !headers.authorization.startsWith(&#39;Bearer &#39;)) {<br>    throw new Error(&#39;Request missing valid authorization&#39;);<br>  }</pre><pre>  var token = headers.authorization.split(&#39;Bearer &#39;)[1];<br>  <strong>var decoded = jwt.verify(token, jwt_secret);</strong></pre><pre>  return decoded.sub;<br>}</pre><blockquote>Snippet from <a href="https://github.com/GoogleCloudPlatform/iot-smart-home-cloud/blob/master/firebase/functions/smart-home/fulfillment.js">functions/smart-home/fulfillment.js</a></blockquote><p>If the provided token is valid, the method will return the UID, which we will need in the intent handlers to query the proper device data. Let’s examine how our device cloud can interact with each intent: <strong>SYNC</strong>, <strong>QUERY</strong>, <strong>EXECUTE</strong>, and <strong>DISCONNECT</strong>.</p><h3>SYNC</h3><p>The Google Assistant sends a <a href="https://developers.google.com/actions/smarthome/create#actiondevicessync"><strong>SYNC</strong></a> intent after account linking succeeds to request the list of available devices. The response tells the Google Assistant which devices are owned by the given user and the capabilities (also known as <a href="https://developers.google.com/actions/smarthome/traits/">traits</a>) of each device. 
This includes an identifier to represent the user (<strong>agentUserId</strong>) and a unique id for each device.</p><p>For the device cloud sample project, this means returning the list of metadata for all devices where the user’s UID is set as the owner.</p><pre>fulfillment.onSync(async (body, headers) =&gt; {<br>  const userId = validateCredentials(headers);<br>  <strong>// Return all devices registered to the requested user<br>  const result = await firestore.collection(&#39;devices&#39;)<br>    .where(&#39;owner&#39;, &#39;==&#39;, userId).get();</strong><br>  const deviceList = [];<br>  result.forEach(doc =&gt; {<br>    const device = new Device(doc.id, doc.data());<br>    deviceList.push(device.metadata);<br>  });</pre><pre>  return {<br>    requestId: body.requestId,<br>    payload: {<br>      agentUserId: userId,<br>      devices: deviceList<br>    }<br>  };<br>});</pre><blockquote>Snippet from <a href="https://github.com/GoogleCloudPlatform/iot-smart-home-cloud/blob/master/firebase/functions/smart-home/fulfillment.js">functions/smart-home/fulfillment.js</a></blockquote><p>The <strong>SYNC</strong> response only contains the device types and their capabilities; it does not report any device state. Below is an example device entry in the <strong>SYNC</strong> response payload for a <a href="https://developers.google.com/actions/smarthome/guides/light">light bulb</a> and <a href="https://developers.google.com/actions/smarthome/guides/thermostat">thermostat</a>:</p><pre>{<br>  id: &#39;light-123abc&#39;,<br>  type: &#39;action.devices.types.LIGHT&#39;,<br>  traits: [<br>    &#39;action.devices.traits.OnOff&#39;,<br>    &#39;action.devices.traits.Brightness&#39;<br>  ],<br>  name: {<br>    name: &#39;Kitchen Light&#39;<br>  },<br>  willReportState: true<br>},<br>{<br>  id: &#39;thermostat-123abc&#39;,<br>  type: &#39;action.devices.types.THERMOSTAT&#39;,<br>  traits: [<br>    &#39;action.devices.traits.TemperatureSetting&#39;<br>  ],<br>  attributes: {<br>    availableThermostatModes: &#39;off,heat,cool&#39;,<br>    thermostatTemperatureUnit: &#39;C&#39;<br>  },<br>  name: {<br>    name: &#39;Hallway Thermostat&#39;<br>  },<br>  willReportState: true<br>}</pre><h3>Request Sync</h3><p>When users add or remove devices associated with their account, you should notify the Google Assistant through the Home Graph API via <a href="https://developers.google.com/actions/smarthome/request-sync">Request Sync</a>. Without this feature in your service, users must unlink and relink their account to see changes or explicitly say “Hey Google, sync my devices”. Calling the request sync API triggers a new <strong>SYNC</strong> intent to allow your service to provide updated device information.</p><p>In our example, we can observe when a device node is added or removed in Firestore, and request a sync in each instance. 
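</p><p>Both triggers below guard the Home Graph call with a <em>verifyAccountLink()</em> helper that isn’t shown in the post; a minimal sketch, assuming the refresh token is stored on the <em>users/{uid}</em> document as described in the account linking section, and using the same Admin SDK <em>firestore</em> handle as the other snippets:</p><pre>async function verifyAccountLink(userId) {<br>  // Account linking stores a refresh token on the user&#39;s document;<br>  // its presence tells us the user has linked to the Assistant.<br>  const user = await firestore.doc(`users/${userId}`).get();<br>  return user.exists &amp;&amp; !!user.data().refresh_token;<br>}</pre><p>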
The HomeGraph API will throw an error if that user has not linked their account, so we also need to verify that a persisted refresh token exists for the user in Firestore (created during account linking).</p><pre>const { smarthome } = require(&#39;actions-on-google&#39;);<br><strong>const homegraph = smarthome({<br>  jwt: require(&#39;./service-account.json&#39;)<br>});</strong></pre><pre>/**<br> * Cloud Function: Request a sync with the Assistant HomeGraph<br> * on device add<br> */<br>functions.firestore.document(&#39;devices/{device}&#39;).onCreate(<br>  async (snapshot, context) =&gt; {<br>    // Obtain the device owner UID<br>    const userId = snapshot.data().owner;<br>    const linked = await verifyAccountLink(userId);<br>    if (linked) {<br>      <strong>await homegraph.requestSync(userId);</strong><br>    }<br>  });</pre><pre>/**<br> * Cloud Function: Request a sync with the Assistant HomeGraph<br> * on device remove<br> */<br>functions.firestore.document(&#39;devices/{device}&#39;).onDelete(<br>  async (snapshot, context) =&gt; {<br>    // Obtain the device owner UID<br>    const userId = snapshot.data().owner;<br>    const linked = await verifyAccountLink(userId);<br>    if (linked) {<br>      <strong>await homegraph.requestSync(userId);</strong><br>    }<br>  });</pre><blockquote>Snippet from <a href="https://github.com/GoogleCloudPlatform/iot-smart-home-cloud/blob/master/firebase/functions/smart-home/request-sync.js">functions/smart-home/request-sync.js</a></blockquote><h3>QUERY</h3><p>The <a href="https://developers.google.com/actions/smarthome/create#actiondevicesquery"><strong>QUERY</strong></a> intent asks for the current state of a specific set of devices (noted by their ids). A <strong>QUERY</strong> may be sent by the Google Assistant in response to a voice command (e.g. “What is the current temperature in the hallway?”) or to update the UI in the Google Home app.</p><pre>fulfillment.onQuery(async (body, headers) =&gt; {<br>  validateCredentials(headers);<br>  // Return device state for the requested device ids<br>  const deviceSet = {};<br>  for (const target of body.inputs[0].payload.devices) {<br>    <strong>const doc = await firestore.doc(`devices/${target.id}`).get();</strong><br>    const device = new Device(doc.id, doc.data());<br>    deviceSet[device.id] = device.reportState;<br>  }</pre><pre>  return {<br>    requestId: body.requestId,<br>    payload: {<br>      devices: deviceSet<br>    }<br>  };<br>});</pre><blockquote>Snippet from <a href="https://github.com/GoogleCloudPlatform/iot-smart-home-cloud/blob/master/firebase/functions/smart-home/fulfillment.js">functions/smart-home/fulfillment.js</a></blockquote><p>The device cloud sample project stores this data in the state field for each device using an internal representation of the device attributes. Our <strong>QUERY</strong> handler converts these attributes to match the device state values required by the Assistant for each trait. 
Our light bulb and thermostat devices declared support for the following traits:</p><ul><li><a href="https://developers.google.com/actions/smarthome/traits/onoff#device-states">OnOff</a></li><li><a href="https://developers.google.com/actions/smarthome/traits/brightness#device-states">Brightness</a></li><li><a href="https://developers.google.com/actions/smarthome/traits/temperaturesetting#device-states">TemperatureSetting</a></li></ul><p>Below is an example of the device entries in the <strong>QUERY</strong> response returning the state for each supported trait:</p><pre>{<br>  &#39;light-123abc&#39;: {<br>    online: true,<br>    on: true,<br>    brightness: 100<br>  },<br>  &#39;thermostat-123abc&#39;: {<br>    online: true,<br>    thermostatMode: &#39;heat&#39;,<br>    thermostatTemperatureSetpoint: &#39;20&#39;,<br>    thermostatTemperatureAmbient: &#39;17&#39;<br>  }<br>}</pre><h3>EXECUTE</h3><p>When the user issues a command (e.g. “Turn on the kitchen light”), your service receives an <a href="https://developers.google.com/actions/smarthome/create#actiondevicesexecute"><strong>EXECUTE</strong></a> intent. This intent provides a distinct set of traits to be updated for a given set of device ids. This allows a single intent to update a group of traits or devices simultaneously.</p><p>Here, we update the contents of the device-configs document for each device, which triggers Cloud IoT Core to publish the configuration change. As we discussed in the previous post, the device will report its new state to Firestore in the devices collection after the change is processed successfully.</p><pre>fulfillment.onExecute(async (body, headers) =&gt; {<br>  validateCredentials(headers);<br>  // Update the device configs for each requested id<br>  const command = body.inputs[0].payload.commands[0];<br>  <br>  // Apply the state update to each device<br>  const update = Device.stateFromExecution(command.execution);<br>  const batch = firestore.batch();<br>  for (const target of command.devices) {<br><strong>    const configRef = firestore.doc(`device-configs/${target.id}`);<br>    batch.update(configRef, update);</strong><br>  }<br>  await batch.commit();</pre><pre>  return {<br>    requestId: body.requestId,<br>    payload: {<br>      commands: {<br>        ids: command.devices.map(device =&gt; device.id),<br><strong>        status: &#39;PENDING&#39;</strong><br>      }<br>    }<br>  };<br>});</pre><blockquote>Snippet from <a href="https://github.com/GoogleCloudPlatform/iot-smart-home-cloud/blob/master/firebase/functions/smart-home/fulfillment.js">functions/smart-home/fulfillment.js</a></blockquote><p>The <strong>EXECUTE</strong> response must return a status code indicating whether each command was successful in changing the device state. If the cloud service can synchronously verify that the command reached the device and updated it, then it returns a <strong>SUCCESS</strong>. If the device is unreachable, then the service can report <strong>OFFLINE</strong> or <strong>ERROR</strong>.</p><p>Since our commands are written to Firestore through one document and the result is sent asynchronously back through another, we report <strong>PENDING</strong> rather than reporting <strong>SUCCESS</strong>. 
This indicates that we expect the command to succeed, and we will report the state change when it arrives.</p><h3>Report State</h3><p>Integrate the <a href="https://developers.google.com/actions/smarthome/report-state">Report State</a> API into your service to proactively report changes in device state to the Google Assistant. This is necessary to publish the latest device information to the Home Graph, which enables Google to look up device state without sending additional QUERY intents to your service.</p><p>In our example, we can define a new cloud function that triggers on updates to the devices collection. Recall from the previous post that this is where state updates from Cloud IoT Core are published. The function takes the updated device state and forwards it to the Home Graph API.</p><pre>const { smarthome } = require(&#39;actions-on-google&#39;);<br>const homegraph = smarthome({<br>  jwt: require(&#39;./service-account.json&#39;)<br>});</pre><pre>/**<br> * Cloud Function: Report device state changes to<br> * Assistant HomeGraph<br> */<br><strong>functions.firestore.document(&#39;devices/{device}&#39;).onUpdate(</strong><br>  async (change, context) =&gt; {<br>    const deviceId = context.params.device;<br>    const device = new Device(deviceId, change.after.data());</pre><pre>    // Check if user has linked to Assistant<br>    const linked = await verifyAccountLink(device.owner);<br>    if (linked) {<br>      // Send a state report<br>      const report = {};<br>      report[`${device.id}`] = device.reportState;<br>      <strong>await homegraph.reportState({<br>        requestId: uuid(),<br>        agentUserId: device.owner,<br>        payload: {<br>          devices: {<br>            states: report<br>          }<br>        }<br>      });</strong><br>    }<br>  });</pre><blockquote>Snippet from <a href="https://github.com/GoogleCloudPlatform/iot-smart-home-cloud/blob/master/firebase/functions/smart-home/report-state.js">functions/smart-home/report-state.js</a></blockquote><blockquote>The format of the states reported for each device is the same as a <strong>QUERY</strong> response.</blockquote><h3>DISCONNECT</h3><p>Your service receives a <a href="https://developers.google.com/actions/smarthome/create#actiondevicesdisconnect"><strong>DISCONNECT</strong></a> intent if the user decides to unlink their account from the Google Assistant. The service should invalidate the credentials used to provide access to this user’s devices.</p><p>For our example, this means clearing out the stored refresh token we generated during the account linking process. This negates any future attempts to gain a new access token until the user links their account again.</p><pre>fulfillment.onDisconnect(async (body, headers) =&gt; {<br>  const userId = validateCredentials(headers);</pre><pre>  // Clear the user&#39;s current refresh token<br><strong>  const userRef = firestore.doc(`users/${userId}`);<br>  await userRef.delete();</strong></pre><pre>  // Return empty body<br>  return {};<br>});</pre><blockquote>Snippet from <a href="https://github.com/GoogleCloudPlatform/iot-smart-home-cloud/blob/master/firebase/functions/smart-home/fulfillment.js">functions/smart-home/fulfillment.js</a></blockquote><h3>What’s next?</h3><p>Congratulations! Now your user’s devices are accessible through the Google Assistant. 
Check out the following resources to go deeper and learn more about building smart home Actions for the Google Assistant:</p><ul><li><a href="https://github.com/GoogleCloudPlatform/iot-smart-home-cloud">Smart Home Device Manager sample</a></li><li><a href="https://developers.google.com/actions/smarthome/">Smart Home Actions documentation</a></li><li><a href="https://actions-on-google.github.io/actions-on-google-nodejs/">Actions on Google client library</a></li></ul><p>You can also follow <a href="https://twitter.com/ActionsOnGoogle">@ActionsOnGoogle</a> on Twitter and connect with other smart home developers in our <a href="https://www.reddit.com/r/GoogleAssistantDev">Reddit community</a>.</p><hr><p><a href="https://medium.com/google-developers/smart-home-cloud-services-with-google-part-2-3901ab39c39c">Smart Home Cloud Services with Google: Part 2</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Google Assistant Smart Home Part 2: API Implementation]]></title>
            <description><![CDATA[<div class="medium-feed-item"><p class="medium-feed-image"><a href="https://medium.com/google-developers/jdanielmyers-smart-home-eac8f87fd56e?source=rss----2e5ce7f173a5---4"><img src="https://cdn-images-1.medium.com/max/1600/0*grx0G316hW7eHPMJ" width="1600"></a></p><p class="medium-feed-snippet">Build voice-controlled devices with Google Assistant. Learn the specifics about the Smart Home API for Actions on Google.</p><p class="medium-feed-link"><a href="https://medium.com/google-developers/jdanielmyers-smart-home-eac8f87fd56e?source=rss----2e5ce7f173a5---4">Continue reading on Google Developers »</a></p></div>]]></description>
            <link>https://medium.com/google-developers/jdanielmyers-smart-home-eac8f87fd56e?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/eac8f87fd56e</guid>
            <category><![CDATA[actions-on-google]]></category>
            <category><![CDATA[google-assistant]]></category>
            <category><![CDATA[smart-home]]></category>
            <category><![CDATA[iot]]></category>
            <dc:creator><![CDATA[Daniel Myers]]></dc:creator>
            <pubDate>Wed, 25 Sep 2019 19:53:16 GMT</pubDate>
            <atom:updated>2019-09-25T19:53:16.011Z</atom:updated>
        </item>
        <item>
            <title><![CDATA[Put your Action to the test: Tips to improve your action with testing]]></title>
            <link>https://medium.com/google-developers/put-your-action-to-the-test-tips-to-improve-your-action-with-testing-8f5685af22d?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/8f5685af22d</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[actions-on-google]]></category>
            <category><![CDATA[dialogflow]]></category>
            <dc:creator><![CDATA[Google Developers]]></dc:creator>
            <pubDate>Tue, 17 Sep 2019 16:29:51 GMT</pubDate>
            <atom:updated>2019-09-17T15:55:06.750Z</atom:updated>
            <content:encoded><![CDATA[<p><em>Posted by Aza Tulepbergenov, Developer Programs Engineer</em></p><p>Testing is an important part of any development process. Without testing, you risk releasing code that results in a frustrating user experience.</p><p>Actions on Google are no different. It’s crucial to test all the pieces of your Action to make sure your users succeed and want to come back to your Action.</p><p>There are three main components that need to be tested:</p><ul><li>Natural language understanding (NLU), which is how we know what the user wants, or their intent</li><li>Intent handling logic, such as your webhook code that implements response building</li><li>Business logic, such as talking to a database or making external API calls</li></ul><p>Testing Actions on Google is a tricky problem because developing an Action spans multiple platforms.</p><p>Without test tools, I’ve often had to look through logs just to find out that an intent name I used in Dialogflow mismatched the name I specified in the webhook implementation. In one instance, when trying to figure out why my Action wasn’t working, I realized that the intent I had specified in Dialogflow (“play super fun cats game”) was different from the name I’d put in my webhook (“super cool cats game”). Another time, I just wanted to test a specific intent handler, and my testing process was to play through the conversation and manually observe behavior in the Actions simulator, since there is no direct way to trigger a specific intent.</p><p>We’ve heard from developers that the testing experience can be better, so we at Developer Relations have been looking into how to improve it, and recently we added some <a href="https://developers.google.com/actions/testing/best-practices">testing best practices</a> to our documentation. These summarize our collective findings and are written to help you test your Actions.</p><p>The main insight we discovered is that it’s easier to think through your Action if you look at each of the programming layers separately: natural language understanding, intent handling and business logic. This provides a nice mental framework to partition your Action into separate <a href="https://en.wikipedia.org/wiki/System_under_test">Subjects Under Test</a> (SUT). The diagram below illustrates the layered model.</p><figure><img alt="image of actions diagram" src="https://cdn-images-1.medium.com/max/1024/0*3CArAF_qIRNrMZDc" /></figure><p><strong>Testing “Facts about Google”</strong></p><p>At Google, we’ve used this mental framework for a few projects already, and in this post I’ll describe the thinking process I used when testing <a href="https://github.com/actions-on-google/dialogflow-facts-about-google-nodejs">Facts about Google</a>. Despite being an example without significant business logic, “Facts about Google” has non-trivial requirements for NLU: it needs to parse custom entities from user speech.</p><p>These <a href="https://cloud.google.com/dialogflow/docs/entities-overview">custom entities</a> play a crucial role in the Action’s behavior, as the Action takes different conversational directions based on those entities. The webhook code has similar requirements and returns complex responses, which are important to how the Action behaves. Those requirements were our base testing requirements.</p><p><strong>Dialogflow</strong></p><p>First, I’ll take a look at how the testing requirements are implemented for Dialogflow. Because the Action uses Dialogflow to implement NLU, the tests can leverage an API to test the requirements in the Dialogflow fulfillment.</p>
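<p>The test snippets below call a <em>dialogflow.detectIntent</em> helper that wraps this API; a minimal sketch of such a wrapper, assuming the <em>dialogflow</em> Node.js client (the repo’s actual helper may differ), might look like:</p><pre>const df = require(&#39;dialogflow&#39;);<br>const sessionClient = new df.SessionsClient();<br>const projectId = process.env.PROJECT_ID; // assumption: the agent&#39;s GCP project<br><br>// Exported as the `dialogflow` helper object used by the tests below<br>module.exports.detectIntent = async function(query) {<br>  // Send the query text to the agent; the response carries queryResult<br>  // with the matched intent and extracted parameters<br>  const session = sessionClient.sessionPath(projectId, &#39;test-session&#39;);<br>  const [response] = await sessionClient.detectIntent({<br>    session,<br>    queryInput: {text: {text: query, languageCode: &#39;en-US&#39;}},<br>  });<br>  return response;<br>};</pre>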
<p>Consider the following snippet that uses Dialogflow’s <a href="https://cloud.google.com/dialogflow/docs/reference/rest/v2/projects.agent.sessions/detectIntent">detectIntent</a> method:</p><pre>test.serial(&#39;choose_fact&#39;, <strong>async</strong> <strong>function</strong>(t) {<br>  <strong>const</strong> resJson = await dialogflow.detectIntent(<br>    &#39;Tell me about the history of Google&#39;);<br>  expect(resJson.queryResult).to.include.deep.keys(&#39;parameters&#39;);<br>  <em>// check that Dialogflow extracted required entities from the query.</em><br>  expect(resJson.queryResult.parameters).to.deep.equal({<br>    &#39;category&#39;: &#39;history&#39;,<br>    <em>// put any other parameters you wish were extracted</em><br>  });<br>  expect(resJson.queryResult.intent.displayName).to.equal(&#39;choose_fact&#39;);<br>  t.pass();<br>});</pre><p>This test case asserts that Dialogflow correctly matches a query to an intent and extracts the correct entities (here, entity “category” has value “history”).</p><p>Aside: If you’re curious or want additional context, you can refer to <a href="https://github.com/actions-on-google/dialogflow-facts-about-google-nodejs/blob/master/functions/test/df-test.js">df-test.js</a> for the full source code.</p><p>I found it useful to map each Dialogflow intent to a test handler (in the snippet above, this is test.serial), which includes assertions applicable for that intent. I recommend testing your NLU for the following:</p><ul><li>Setting entities correctly</li><li>Setting contexts correctly</li><li>Matching difficult queries correctly</li></ul><p>“Facts about Google” uses Dialogflow as an implementation of the NLU layer. However, if you’re an Actions SDK developer, you can apply our recommendations to the NLU implementation of your choice if it follows a similar structured data format.</p><p><strong>Webhook</strong></p><p>Your webhook plays an important role in delivering a good user experience: it is responsible for the conversational responses your Action returns and controls the flow of the conversation.</p><p>Facts about Google returns complex responses that include <a href="https://developers.google.com/actions/assistant/responses#suggestion_chips">suggestion chips</a>, <a href="https://developers.google.com/actions/assistant/responses#basic_card">cards</a>, and text responses. Google’s <a href="https://designguidelines.withgoogle.com/conversation/conversational-components/suggestions.html">conversational design guidelines</a> stress how important it is to incorporate visual responses to better guide the user through the conversation. Hence, suggestion chips are an important piece of the Action.</p><p>Additionally, one of the common bugs among Actions on Google developers is a result of programmers misusing the client library by mixing up conv.ask and conv.close. The snippet below tests both of those pieces.</p>
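<p>The test relies on a <em>getAppResponse</em> helper that sends a synthetic Dialogflow webhook request to the fulfillment and returns the JSON response; a minimal sketch, assuming a locally running fulfillment and per-intent request fixtures captured from the simulator (the endpoint URL and fixture layout are assumptions; see the official repo for the real implementation):</p><pre>const fetch = require(&#39;node-fetch&#39;);<br><br>async function getAppResponse(intentName) {<br>  // Replay a captured Dialogflow webhook request against the app<br>  const body = require(`./fixtures/${intentName}.json`);<br>  const res = await fetch(&#39;http://localhost:5000/fulfillment&#39;, {<br>    method: &#39;POST&#39;,<br>    headers: {&#39;Content-Type&#39;: &#39;application/json&#39;},<br>    body: JSON.stringify(body),<br>  });<br>  return res.json();<br>}</pre>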
“Facts about Google” uses Dialogflow as an implementation of the NLU layer. However, if you’re an Actions SDK developer, you can apply our recommendations to the NLU implementation of your choice if it follows a similar structured data format.</p><p><strong>Webhook</strong></p><p>Your webhook plays an important role in controlling a good user experience: it is responsible for the conversational responses your Action returns and controls the flow of the conversation.</p><p>Facts about Google returns complex responses that include <a href="https://developers.google.com/actions/assistant/responses#suggestion_chips">suggestion chips</a>, <a href="https://developers.google.com/actions/assistant/responses#basic_card">cards</a>, and text responses. Google’s <a href="https://designguidelines.withgoogle.com/conversation/conversational-components/suggestions.html">conversational design guidelines</a> emphasize how important visual responses are for guiding the user through a conversation. Hence, suggestion chips are an important piece of the Action.</p><p>Additionally, one of the common bugs among Actions on Google developers is a result of misusing the client library by mixing up conv.ask and conv.close.</p><p>The snippet below tests both of those pieces. In the code, expect(jsonRes.payload.google.richResponse.suggestions).to.have.deep.members checks that suggestion chips are present and expect(jsonRes.payload.google.expectUserResponse).to.be.true checks that your Action doesn’t close the mic in the middle of a conversation with the user.</p><pre>test.serial('yes-history', async function(t) {<br>  const jsonRes = await getAppResponse('yes-history');<br>  expect(jsonRes.payload).to.have.deep.keys('google');<br>  expect(jsonRes.payload.google.expectUserResponse).to.be.true;<br>  expect(jsonRes.payload.google.richResponse.items).to.have.lengthOf(3);<br>  expect(jsonRes.payload.google.richResponse.suggestions).to.have<br>    .deep.members([<br>      {'title': 'Sure'}, {'title': 'No thanks'},<br>    ]);<br>  t.pass();<br>});</pre><figure><img alt="screenshot of actions demo app" src="https://cdn-images-1.medium.com/max/585/0*uCCQ2u3fJvON5Z4I" /></figure><p>The most important piece of the snippet is the getAppResponse function, which sends a synthetic payload to an instance of your Dialogflow app and receives a response. This response is used as the main SUT. I encourage you to take a look at the detailed implementation in the <a href="https://github.com/actions-on-google/dialogflow-facts-about-google-nodejs/blob/master/functions/test/index-test.js#L31">official repo</a>.</p><p><strong>Integration</strong></p><p>The tests I wrote for the NLU and business logic layers give me some confidence that the Action works as expected, because each unit is tested. However, to boost that confidence even more, I decided to implement an integration test that checks how those two units work together.</p><p>A good way to come up with a test case is to look at the unit tests done for Dialogflow and the webhook, and combine the scenarios. For example, the snippet below combines the test cases we did for Dialogflow and the webhook:</p><pre>test.serial('tell me about cats', async function(t) {<br>  const jsonRes = await dialogflow.detectIntent(<br>    'Tell me about cats'<br>  );<br>  const payload = jsonRes.queryResult.webhookPayload;<br>  expect(payload).to.have.deep.keys('google');<br>  expect(payload.google.expectUserResponse).to.be.true;<br>  expect(payload.google.richResponse.items)<br>    .to.have.lengthOf(3);<br>  expect(payload.google.richResponse.suggestions).to.have<br>    .deep.members([<br>      {'title': 'Sure'}, {'title': 'No thanks'},<br>    ]);<br>  t.pass();<br>});</pre><p>To recap, we used the methodology described in the <a href="https://developers.google.com/actions/testing/best-practices">Testing Best Practices</a> page of the Actions on Google documentation to provide test coverage for “Facts About Google”. One of the goals I had when writing this post was to give insight into my thinking process when coming up with test cases — I encourage you to apply similar processes in your development and provide robust coverage for your Actions.</p><p><em>Thanks for reading! To share your thoughts or questions, join us on Reddit at </em><a href="https://www.reddit.com/r/GoogleAssistantDev/"><em>r/GoogleAssistantDev</em></a><em>.</em></p><p><em>Follow </em><a href="https://twitter.com/ActionsOnGoogle"><em>@ActionsOnGoogle</em></a><em> on Twitter for more of our team’s updates, and tweet using #AoGDevs to share what you’re working on. 
Can’t wait to see what you build!</em></p><hr><p><a href="https://medium.com/google-developers/put-your-action-to-the-test-tips-to-improve-your-action-with-testing-8f5685af22d">Put your Action to the test: Tips to improve your action with testing</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Optimize your web apps for Interactive Canvas]]></title>
            <link>https://medium.com/google-developers/optimize-your-web-apps-for-interactive-canvas-18f8645f8382?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/18f8645f8382</guid>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[google-assistant]]></category>
            <category><![CDATA[actions-on-google]]></category>
            <dc:creator><![CDATA[Leon Nicholls]]></dc:creator>
            <pubDate>Wed, 11 Sep 2019 17:01:03 GMT</pubDate>
            <atom:updated>2019-11-06T17:57:43.302Z</atom:updated>
            <content:encoded><![CDATA[<p>Using <a href="https://developers.google.com/actions/interactivecanvas/">Interactive Canvas</a> to create an Action for the Google Assistant combines the best of conversational interfaces with the rich visual capabilities of HTML.</p><p>We’ve been using Interactive Canvas for a while, experimenting with various ideas and working with partners to launch their Actions. Along the way, we’ve learned some lessons about what works well, and we’ll pass these on in this post to help you create a successful Action using Interactive Canvas.</p><p>Note: At this time, Google is only approving Canvas Actions that are gaming experiences.</p><h3>Design</h3><p>Actions using Interactive Canvas are <a href="https://developers.google.com/actions/conversational/overview">conversational Actions</a>. You should start designing your game by thinking about how a voice user interface can be complemented with visuals using Interactive Canvas. Check out our <a href="https://developers.google.com/actions/design/">design guidelines</a> as a starting point for designing your Action conversation.</p><p>For Actions using Interactive Canvas, we recommend that you create storyboards to cover all of the main stages of your game, such as the loading screen, the welcome and tutorial screens, the main gaming screens, and the end screen.</p><p>Since the user can also touch the screen when playing the game, consider how you’ll provide feedback for these touch events. We recommend that you provide an immediate confirmation within the web app and not wait for the callback response for the user input.</p><p>You should also be aware of the various <a href="https://developers.google.com/actions/interactivecanvas/build/web-app#restrictions">restrictions</a> on Interactive Canvas web apps, such as the lack of local storage and the 200MB memory limit.</p><p>Once you have determined the visual elements, GUI components, and animation for your Action, select the best technical implementation for the devices that support Interactive Canvas. In the next sections, we recommend various HTML technologies that work well with Interactive Canvas on all the supported devices to create experiences like this demo Action we have been playing with:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/0*u9WBHRvq1YTE_xZw" /><figcaption>Interactive Canvas Action</figcaption></figure><h3>Loading page</h3><p>We recommend using a loading page to wait for the web page to load completely before showing any of your main animations. This will make a noticeable difference to the performance of the animations and avoid any potential layout issues as the page assets are loading.</p><p>While the loading page is being displayed, JavaScript logic can be used to asynchronously load any assets like image files and textures.</p><p>The simplest way to design a loading page is to have a solid background and display an animated loading indicator. Here, you want to avoid using JavaScript to drive the animation, so using either CSS or an animated GIF is ideal. Take a look at these cool <a href="https://projects.lukehaas.me/css-loaders/">CSS loaders</a> for ideas for your loading page.</p><h3>Responsive design</h3><p>Currently, Interactive Canvas is supported on Google Assistant Smart Displays and on Android mobile devices. 
These devices have different screen resolutions and can also support both landscape and portrait modes.</p><p>Your web app should use a responsive design to ensure that the layout and text are appropriately sized for each kind of display. A simple way to do this is to use <a href="https://sass-lang.com/">Sass</a> breakpoints:</p><pre>@mixin for-small-display-landscape {<br>  @media screen and (min-width: 1024px) and (min-height: 600px) {<br>    @content;<br>  }<br>}<br><br>@mixin for-medium-display-landscape {<br>  @media screen and (min-width: 1280px) and (min-height: 700px) {<br>    @content;<br>  }<br>}</pre><p>Also, Actions have a header at the top of the screen that displays the Action name and icon and provides a way for the user to close the Action. You shouldn’t place any important content or text behind the header.</p><p>Interactive Canvas provides a <a href="https://developers.google.com/actions/interactivecanvas/reference/interactivecanvas#getheaderheightpx">getHeaderHeightPx()</a> API that asynchronously determines the header height in pixels. Your web app can use this value to dynamically adjust the layout when the web app is loaded:</p><pre>window.onload = () =&gt; {<br>  const callbacks = {<br>    onUpdate(data) {<br>      // update game state based on intent response data<br>    },<br>  };<br>  interactiveCanvas.ready(callbacks);<br><br>  interactiveCanvas.getHeaderHeightPx().then((height) =&gt; {<br>    // initialize web app layout with header height value<br>  });<br>};</pre><h3>Animation</h3><p>HTML provides a rich set of options for animation, from manipulating the DOM to CSS animations, HTML canvas, and WebGL.</p><p>In our experience, animation that relies mostly on real-time JavaScript calculations tends to require more optimization. However, there are other options to consider that provide smooth animation at a high FPS.</p><h3>CSS</h3><p>Rather than relying on JavaScript to manipulate the DOM, consider using <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Animations/Using_CSS_animations">CSS animations</a>. The browser can use the GPU to accelerate these kinds of animations and manage them separately from the main UI thread.</p><p>If you don’t want to use the low-level CSS properties, there are CSS libraries like <a href="https://daneden.github.io/animate.css/">animate.css</a>, which provide CSS classes for a large collection of high-quality animations. 
To use these animations, first declare your elements with the necessary CSS animation properties:</p><pre>.title {<br>  animation-duration: 3s;<br>  animation-delay: 2s;<br>  animation-iteration-count: infinite;<br>}</pre><p>The animation classes can then be declared statically with the HTML elements or dynamically triggered via JavaScript:</p><pre>const element = document.querySelector(&#39;.title&#39;);<br>element.classList.add(&#39;animated&#39;, &#39;bounceInUp&#39;);<br>element.style.visibility = &#39;visible&#39;;<br>element.addEventListener(&#39;animationend&#39;, () =&gt; {<br>  element.style.visibility = &#39;hidden&#39;;<br>});</pre><p>We recommend that you avoid simultaneous animations and instead use a sequence of animations, each triggered by the end event of the previous animation.</p><p>You can also consider using the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Animations_API/Using_the_Web_Animations_API">Web Animations API</a>, which combines the performance of CSS animations with the dynamic capabilities of JavaScript.</p><h3>SVG</h3><p>SVG allows developers to create two-dimensional vector graphics that can be dynamically manipulated using either JavaScript or CSS.</p><p>SVG supports an <a href="https://developer.mozilla.org/en-US/docs/Web/SVG/Element/animate">&lt;animate&gt;</a> tag that can be used to animate an attribute of an element over time. The results on Canvas devices are quite good for basic animations.</p><p>However, you can do even better by using CSS to animate SVG graphical elements. SVG <a href="https://developer.mozilla.org/en-US/docs/Web/SVG/Element/path">&lt;path&gt;</a> elements can be assigned classes that are then animated using <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Animations/Using_CSS_animations">CSS animations</a>. These animations are rendered smoothly on devices.</p><h3>WebGL</h3><p>WebGL uses the device GPU to significantly improve the performance of animations on web pages. If you do not have experience with WebGL, there are many libraries, like <a href="https://www.pixijs.com/">PixiJS</a>, that provide easy, high-level JavaScript APIs for designing animations that are then rendered using WebGL.</p><p>For animators who are familiar with tools like Adobe Animate, the <a href="https://github.com/pixijs/pixi-animate-extension">PixiAnimate</a> extension allows content to be exported that can then be played by the PixiJS runtime library. In the Canvas web app, the <a href="https://github.com/pixijs/pixi-animate">Pixi Animate plugin</a> is used to load the exported animation. JavaScript code can then control the animation and even make dynamic changes to the visuals, like changing the colors of shapes.</p><p>We recommend avoiding “Shape Tween” in Adobe Animate since that generates many textures that can run over the memory limit for Canvas web apps. Instead, use “Classic Tweens”.</p><h3>WebAssembly</h3><p><a href="https://webassembly.org/">WebAssembly</a> (WASM) is an interesting option for developers who have existing code in other languages. For example, you can take your existing game in C or C++ and use a toolchain like <a href="https://emscripten.org/">Emscripten</a> to compile it to WebAssembly. The Canvas web app can load and instantiate the WASM code. JavaScript logic can then call the WASM functions.</p><p>WebAssembly provides developers with low-level control coupled with high-level performance. We’ve experimented with compiling <a href="https://rustwasm.github.io/">Rust to WASM</a> to avoid garbage collection and achieve super-smooth animations at 60FPS.</p><p>Using WASM allows the best of both worlds: you can use HTML for fast text rendering and for displaying static elements, then overlay these on the WASM animations.</p>
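<p>To make the loading step concrete, here is a minimal, hypothetical sketch using the standard WebAssembly JavaScript API. The “game.wasm” file name and the exported update() function are illustrative, and the module is assumed to need no imports:</p><pre>async function initWasm() {<br>  // instantiateStreaming compiles the module while it is downloading.<br>  const { instance } = await WebAssembly.instantiateStreaming(<br>      fetch(&#39;game.wasm&#39;), {});<br>  // Drive the game loop from JavaScript by calling into WASM.<br>  const frame = (time) =&gt; {<br>    instance.exports.update(time);<br>    requestAnimationFrame(frame);<br>  };<br>  requestAnimationFrame(frame);<br>}<br>initWasm();</pre>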
<h3>Video</h3><p>Using video assets in a game can be an effective way to display high-production visuals and create engaging experiences such as <a href="https://en.wikipedia.org/wiki/Cutscene">cutscenes</a>.</p><p>For visuals that might be too taxing to generate “on the fly”, consider pre-rendering the animations as a video and then using the HTML media element to play the video in the web app. This can be a very effective way to provide rich backgrounds, which are then combined with dynamic DOM elements and animations.</p><p>For Canvas, only one active media element is allowed and the video element cannot be styled with CSS.</p><p>For long-running effects, the video can be looped by specifying the “loop” attribute for the media element. However, auto looping can introduce a delay between the end of the loop and the beginning of the next loop.</p><p>Making the transition seamless requires the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Media_Source_Extensions_API">Media Source Extensions</a> (MSE) API, which extends the media element to allow JavaScript to generate media streams for playback. MSE allows segments of media to be handed directly to the HTML5 video tag’s buffer. JavaScript code is needed to download and buffer the video data to ensure that the video keeps looping without any delays between each loop.</p><p><em>Update: Read the post on how to use </em><a href="https://medium.com/google-developers/use-video-loops-with-interactive-canvas-dc7503e95c6a"><em>video loops</em></a><em>.</em></p><h3>State management</h3><p>The web app can use the Interactive Canvas <a href="https://developers.google.com/actions/interactivecanvas/reference/interactivecanvas#sendtextquery">sendTextQuery()</a> API to invoke an intent in the Dialogflow agent and synchronize its state with the fulfillment logic. This invocation follows the same lifecycle as a user’s vocal response and can add some delay to any backend persistence.</p><p>If persisting state updates to the backend is time-sensitive, the web app can invoke persistence APIs, like <a href="https://firebase.google.com/docs/firestore">Cloud Firestore</a>, directly from its front-end JavaScript logic.</p><h3>Testing</h3><p>We recommend using <a href="https://developers.google.com/web/tools/chrome-devtools/">Chrome DevTools</a> to profile your web app performance and memory usage. When you test your Action in the desktop <a href="https://developers.google.com/actions/tools/simulator">simulator</a>, find the iframe that is used to embed the web app and then use the DevTools to inspect the web app DOM.</p><p>We’ve found DevTools useful for profiling memory and CPU usage. This allows you to find any bottlenecks in your code and improve the FPS of your animations.</p><p>We also recommend using <a href="https://github.com/mrdoob/stats.js/">stats.js</a> to display an overlay on your game that tracks the real-time FPS.</p>
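<p>stats.js takes only a few lines to wire into a render loop. This is a minimal sketch that assumes the stats.js script is already loaded; the render() call stands in for your own drawing code:</p><pre>const stats = new Stats();<br>stats.showPanel(0); // panel 0 shows FPS<br>document.body.appendChild(stats.dom);<br><br>function animate() {<br>  stats.begin();<br>  render(); // your game rendering<br>  stats.end();<br>  requestAnimationFrame(animate);<br>}<br>requestAnimationFrame(animate);</pre>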
<p>Once you are happy with the performance, make sure you test your Action on all the devices supported by Interactive Canvas since the screen ratios vary and device performance differs.</p><h3>Next steps</h3><p>Depending on the kind of game you are developing with Interactive Canvas, you have a range of choices to create smooth animations and effects. It does require some testing and profiling, but you can achieve delightful, immersive experiences for your Interactive Canvas games.</p><p>It’s still early days for games using Interactive Canvas, so take this opportunity to kick the tires and experiment with what is possible using the power of HTML.</p><p><em>Read our </em><a href="https://medium.com/google-developers/create-rich-immersive-google-assistant-games-with-interactive-canvas-b24ec30d2e31"><em>previous post</em></a><em> on Interactive Canvas.</em></p><p><em>To share your thoughts or questions, join us on Reddit at </em><a href="https://www.reddit.com/r/GoogleAssistantDev/"><em>/r/GoogleAssistantDev</em></a><em>. Follow </em><a href="https://twitter.com/ActionsOnGoogle"><em>@ActionsOnGoogle</em></a><em> on Twitter for more of our team’s updates, and tweet using #AoGDevs to share what you’re working on.</em></p><hr><p><a href="https://medium.com/google-developers/optimize-your-web-apps-for-interactive-canvas-18f8645f8382">Optimize your web apps for Interactive Canvas</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Training the next generation of voice developers through hackathons and MLH Localhost]]></title>
            <link>https://medium.com/google-developers/training-the-next-generation-of-voice-developers-through-hackathons-and-mlh-localhost-65707051b89f?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/65707051b89f</guid>
            <category><![CDATA[voice-assistant]]></category>
            <category><![CDATA[major-league-hacking]]></category>
            <category><![CDATA[actions-on-google]]></category>
            <category><![CDATA[hackathons]]></category>
            <dc:creator><![CDATA[Nick Felker]]></dc:creator>
            <pubDate>Wed, 21 Aug 2019 12:06:01 GMT</pubDate>
            <atom:updated>2019-08-21T12:06:01.255Z</atom:updated>
            <content:encoded><![CDATA[<p>During the Fall 2018 and Spring 2019 seasons, Google Cloud and Actions on Google partnered with <a href="https://mlh.io">Major League Hacking</a> (MLH), the official student hackathon league, to provide students with the resources they needed to build innovative projects during the hundreds of hackathons that took place in North America and Europe. Google and MLH also developed a Localhost Module that is student-led and student-trained, so that communities can learn and grow together.</p><p>Hackathons are great places to experiment and try new ideas. During my undergraduate years, I attended many hackathons and organized one myself. Everyone gets time to focus on building a prototype, with everything they need to accomplish it. You don’t need to be an expert in a given field, or have access to the highest-end technology, as mentors and MLH are there to help you. It’s a place where you collaborate with others and exchange ideas.</p><p>At the end of the event, you get to show off your project to everyone else. The hackathon is transformed into a tiny world’s fair, where people show off their accomplishments and you get to see everybody’s creativity.</p><p>With Google Cloud Platform and Actions on Google, you get access to more resources and tools to bring your ideas to life.</p><h3><strong>Recap of the hackathon season</strong></h3><p>To help students get started with Google Cloud Platform, hackathon attendees received <a href="http://cloud.google.com/edu">Cloud Credits</a> that let them use <a href="https://cloud.google.com/products/">the assortment of GCP products and APIs</a> to prototype for free. What differentiates these credits from the GCP free trial is that they don’t require a credit card, a great offering for students who may not have one or may be hesitant to provide one for fear of billing snafus.</p><p>Additionally, a Best Use of Google Cloud Platform prize is awarded at every hackathon, and each person on the winning team receives a Google Home Mini.</p><p>The project <a href="https://devpost.com/software/i-know-trash">I Know Trash</a> won the Best Use of Google Cloud Platform prize at DragonHacks, the annual hackathon at Drexel University. It uses the Google Cloud Vision API on a Raspberry Pi to identify the item you’re holding and places it either in the garbage or recycling segments of the trash can, making recycling more efficient.</p><p>Another project, <a href="https://devpost.com/software/bus-buddy-o4teqr">Bus Buddy</a>, won the prize at Cypher, the annual hackathon at William &amp; Mary. It is an Action for the Google Assistant that provides transit information for the university’s bussing system. You can ask a question like, “When’s the Purple Line coming?” or “When will the Northline arrive at Barracks?” and get quick answers.</p><p>We also provided Google Homes for MLH’s <a href="https://hack.mlh.io/hardware/">hardware lab</a>, a service that allows attendees to try out hardware during the event without having to buy it themselves. This makes these services accessible to a wider audience and allows teams to innovate without worrying about costs. Hundreds of Google Homes have been checked out during the past year’s hackathon seasons.</p><p>Many interesting Actions were created to solve novel challenges. 
One of them, <a href="https://devpost.com/software/doctor-smart-tap">Doctor Smart Tap</a>, lets you use voice commands to dispense a precise amount of liquid. This could be useful in lab environments where you may not want to handle liquids directly.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fagglf_cdy9Y%3Ffeature%3Doembed&amp;url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dagglf_cdy9Y&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fagglf_cdy9Y%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/0292db315c3a29ed7b60b0854e60adaf/href">https://medium.com/media/0292db315c3a29ed7b60b0854e60adaf/href</a></iframe><p>To learn more about Doctor Smart Tap, you can watch the demo in the video above.</p><p>Other Actions, like <a href="https://devpost.com/software/sicko-code">SickoCode</a>, were more humorous: it uses machine learning to generate rap lyrics in the style of a specific artist.</p><h3><strong>MLH Localhost</strong></h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/654/0*TAcWCoufJjUR1bIn" /></figure><p>In addition, we worked together to create a Localhost Workshop. MLH Localhost is a program that provides student groups with content bundles containing everything they need to host a workshop themselves and teach the rest of the group about a given topic.</p><p>Our workshop, <a href="https://localhost.mlh.io/activities/actions-on-google/"><em>Ok Google, How Do I Build Actions for Assistant</em></a>, walks students through building an Action that performs sentiment analysis on a given topic by reading recent tweets about it. The guide introduces them to Dialogflow and the <a href="https://cloud.google.com/natural-language/">Cloud Natural Language API</a> to provide a positivity score.</p><p>This has been one of the more popular modules, run more than 25 times by student groups around the world, at places like Parul University and the University of British Columbia.</p><h3><strong>Conclusion</strong></h3><p>The Fall 2019 hackathon season begins August 23rd. We can’t wait to see what is built at the next hackathons, and we’ll continue to ensure that you get the resources you need to get started.</p><p><em>Want more? Head over to the </em><a href="https://reddit.com/r/GoogleAssistantDev"><em>Actions on Google Reddit community</em></a><em> to discuss Actions with other developers. You can also join the Actions on Google developer community program and you could earn a $200 monthly Google Cloud credit when you publish your first Action.</em></p><hr><p><a href="https://medium.com/google-developers/training-the-next-generation-of-voice-developers-through-hackathons-and-mlh-localhost-65707051b89f">Training the next generation of voice developers through hackathons and MLH Localhost</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Bringing Tabletop Audio to Actions on Google through media responses]]></title>
            <link>https://medium.com/google-developers/bringing-tabletop-audio-to-actions-on-google-through-media-responses-a48bbcd9a38?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/a48bbcd9a38</guid>
            <category><![CDATA[actions-on-google]]></category>
            <category><![CDATA[tabletop-games]]></category>
            <category><![CDATA[tabletop-rpg]]></category>
            <category><![CDATA[music]]></category>
            <dc:creator><![CDATA[Nick Felker]]></dc:creator>
            <pubDate>Mon, 05 Aug 2019 21:46:39 GMT</pubDate>
            <atom:updated>2019-08-05T21:43:31.102Z</atom:updated>
            <content:encoded><![CDATA[<p>A lot of game masters look for music that best fits the mood of their game, and a website called <a href="https://tabletopaudio.com/">Tabletop Audio</a> has been meeting this need with high-quality original tracks for a variety of genres and ambiances.</p><p>It was in my second year of college that my friends and I began playing tabletop RPGs. We came up with an elaborate story, and tried to make our games as immersive as possible. The preparation for each session included looking for instrumental music from video games that would fit the mood of each story beat.</p><p>In my most recent meetup with my friends to play another board game, I brought my Google Home Mini to provide ambiance and to avoid awkward silences while we thought about our next move.</p><p>Building a voice experience for Tabletop Audio made a lot of sense, as it’s easy to start music through a voice command, and Tabletop Audio provides a good collection of audio to listen to based on the situation.</p><p>I created an Action for the Google Assistant, and you can now say “Talk to Tabletop Audio” to invoke it and get suggestions. You can then ask questions like “What songs are new”, or start playing a song by saying things like “Play Medieval Fair”. The project has also been set up to support <a href="https://developers.google.com/actions/discovery/explicit#invocation_phrase_optional">invocation phrases</a>, enabling users to start music with a single query, such as “Ask Tabletop Audio to play Medieval Fair”.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*R_2ANVMMrvoIfAVl" /></figure><p><em>Smart display result for saying “Ask Tabletop Audio to play Medieval Fair”</em></p><p>This is built primarily with <a href="https://developers.google.com/actions/assistant/responses#media_responses">media responses</a>, which let you return an audio file of any length along with track metadata. The Assistant begins playing this file, with built-in controls available on screen and by voice. You can pause at any point by tapping on the Google Home Mini, pressing the pause icon, or saying “Hey Google, pause”. The developer doesn’t have to implement any media player logic.</p><h3>Design</h3><p>Before writing code, I thought about the different types of interactions that I would want a user to have with this Action, in order to design the conversation appropriately. I started by creating a series of sample dialogs. After I did this, I was able to begin creating intents with <a href="https://dialogflow.com">Dialogflow</a>.</p><p><strong>USER</strong></p><p>Hey Google, talk to Tabletop Audio.</p><p><strong>GOOGLE ASSISTANT</strong></p><p>Sure, here’s Tabletop Audio.</p><p><strong>TABLETOP AUDIO</strong></p><p>Welcome! I can play a specific track, like ‘Wizard’s Lab’, or a genre, like ‘fantasy.’ What would you like?</p><p><strong>USER</strong></p><p>(Hmm, the party is currently camping.)</p><p>Play something with nature.</p><p><strong>TABLETOP AUDIO</strong></p><p>I found several tracks for the category nature. 
Here is one at random: Secret Garden.</p><p>(Track begins to play)</p><p>(A few minutes later)</p><p><strong>USER</strong></p><p>Hey Google, what song is playing again?</p><p><strong>TABLETOP AUDIO</strong></p><p>(Music pauses)</p><p>This is Secret Garden from Tabletop Audio.</p><p>(Music continues)</p><hr><p><strong>USER</strong></p><p>Hey Google, talk to Tabletop Audio.</p><p><strong>GOOGLE ASSISTANT</strong></p><p>Sure, here’s Tabletop Audio.</p><p><strong>TABLETOP AUDIO</strong></p><p>Welcome! I can play a specific track, like ‘Wizard’s Lab’, or a genre, like ‘fantasy.’ What would you like?</p><p><strong>USER</strong></p><p>What tracks are new?</p><p><strong>TABLETOP AUDIO</strong></p><p>The last three tracks added to Tabletop Audio are: Wizard’s Lab, Robotics Lab, and Castle Jail. What do you want to listen to?</p><p><strong>USER</strong></p><p>Castle Jail</p><p><strong>TABLETOP AUDIO</strong></p><p>Here is Castle Jail</p><p>(Music starts playing)</p><hr><p><strong>USER</strong></p><p>Hey Google, talk to Tabletop Audio.</p><p><strong>GOOGLE ASSISTANT</strong></p><p>Sure, here’s Tabletop Audio.</p><p><strong>TABLETOP AUDIO</strong></p><p>Welcome! I can play a specific track, like ‘Wizard’s Lab’, or a genre, like ‘fantasy.’ What would you like?</p><p><strong>USER</strong></p><p>(Distracted)</p><p>No I don’t know where the car keys are</p><p><strong>TABLETOP AUDIO</strong></p><p>I can’t find a track with that description, but I found others like Cave of Time. You can also ask “What are the latest tracks?”. What would you like to listen to?</p><p><strong>USER</strong></p><p>Oh, um, play Cave of Time I guess</p><p><strong>TABLETOP AUDIO</strong></p><p>Here is Cave of Time</p><p>(Music starts playing)</p><h3>Media Responses</h3><p>First, I needed access to the API for Tabletop Audio. I reached out to the owner with this idea, and he was happy to provide me with a JSON URL that contains an array of every track’s metadata. Once I had access to the API, I was able to begin making queries to get each track’s audio URL and related metadata such as the track name, genre, and image. To prevent a given user from making multiple queries to this API each time they use it, I decided to cache the data in session data when the user first invokes the Action. I can do this in the <a href="https://www.npmjs.com/package/actions-on-google">Node.js Actions on Google library</a> with the <a href="https://developers.google.com/actions/assistant/save-data#save_data_between_turns_of_a_conversation">conv.data object</a>. This data will only be available until the Action ends.</p><p>As the function was written in TypeScript, I created several interfaces to formally define the data structures I expected. This allows me to take advantage of type safety through the compiler and autocompletion in my IDE.</p><p>I define the API response in TabletopAudioResponse, which has an array of TabletopAudioTrack objects. I store the API response and the currently playing track in my session data, which is represented by the TabletopAudioSession interface. Then I extend the default DialogflowConversation interface with these types. You can see the full implementation in the code snippet at the link below:</p><p><a href="https://medium.com/media/347074cfa2fa1864735e0dd6b740f1fd/href">https://medium.com/media/347074cfa2fa1864735e0dd6b740f1fd/href</a></p>
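<p>The shape of those types looks roughly like this hypothetical sketch (the field names are illustrative, not taken from the actual API or source):</p><pre>// Rough sketch of the interfaces described above; names are illustrative.<br>interface TabletopAudioTrack {<br>  track_title: string;<br>  link: string; // URL of the audio file<br>  genre: string[];<br>  tags: string[];<br>}<br><br>interface TabletopAudioResponse {<br>  tracks: TabletopAudioTrack[];<br>}<br><br>interface TabletopAudioSession {<br>  json?: TabletopAudioResponse;<br>  currentTrack?: TabletopAudioTrack;<br>}<br><br>// Extend the library&#39;s conversation type with typed session data.<br>interface AudioConversation<br>    extends DialogflowConversation&lt;TabletopAudioSession&gt; {}</pre>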
<p>To make sure I only call this API once per user per session, I created a function to fetch this data and save it in session data. You can see the implementation of the function at the link below:</p><p><a href="https://medium.com/media/0e79dea987d2cad943a42368000f32a6/href">https://medium.com/media/0e79dea987d2cad943a42368000f32a6/href</a></p><p>When I first prototyped this Action, I only called this function in my welcome intent. After further testing, I found that my Action would not work when I used an invocation phrase, because the Action would skip the welcome intent and try to run the playback intent using data that was never fetched.</p><p>To make sure that this function would always run the first time that the Action started, I created a <a href="https://actions-on-google.github.io/actions-on-google-nodejs/interfaces/dialogflow.dialogflowapp.html#middleware">middleware function</a>. You can see an implementation at the link below:</p><p><a href="https://medium.com/media/7d66044844781ba64e6b310166f551c0/href">https://medium.com/media/7d66044844781ba64e6b310166f551c0/href</a></p><p>Now that I had a list of tracks, I was able to create a media response. I abstracted the logic to a separate function, shown at the link below:</p><p><a href="https://medium.com/media/5efd116ca7df07ae0f1c9f34b44fbeaf/href">https://medium.com/media/5efd116ca7df07ae0f1c9f34b44fbeaf/href</a></p><p>My playback intent could then start playing this track:</p><p><a href="https://medium.com/media/e86c953aa0830156403f9b7b17c92783/href">https://medium.com/media/e86c953aa0830156403f9b7b17c92783/href</a></p><p>Now that I had a prototype for playing back audio, it was time to implement a search capability.</p><h3>Search</h3><p>Each track has a title, a list of genres, and a list of tags. A user may want to find a track based on any of these parameters, and I wanted to make sure I could capture any of these values and scan the tracklist.</p><p>Using Dialogflow, I was able to create a set of training phrases to represent all of the possible ways that a user could search for a given track. I highlighted each searchable term within the phrase and marked it as a parameter with the type @sys.any. This entity type can capture any text, but it does mean you should add more training phrases to ensure that Dialogflow can identify what is part of the search.</p><p>To make sure that this intent did not trigger too frequently, I changed its priority to <em>Low</em> by selecting the dot in the top-left corner of the page.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/237/0*prz9R4A-gx9GVHNb" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/811/0*cTeTkSKa4IouHxgL" /></figure><p>By passing the search parameter to my fulfillment, I added a simple search function. If a search query matched a track title, it would begin playing that track. Otherwise, if the user searched for a genre or tag, it would return a random result. I also added some sanitizing of track titles so that users didn’t have to worry about letter case or other symbols. You can see the implementation at the link below:</p><p><a href="https://medium.com/media/b85493a2d311d87b7bd979d3504ad792/href">https://medium.com/media/b85493a2d311d87b7bd979d3504ad792/href</a></p>
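<p>Conceptually, the search flow looks like this hypothetical sketch (the helper and field names are illustrative, not from the actual source):</p><pre>// Hypothetical sketch of the search flow described above.<br>const sanitize = (text: string) =&gt;<br>    text.toLowerCase().replace(/[^a-z0-9 ]/g, &#39;&#39;).trim();<br><br>function findTrack(tracks: TabletopAudioTrack[], query: string) {<br>  const search = sanitize(query);<br>  // An exact title match wins.<br>  const byTitle = tracks.find(<br>      (track) =&gt; sanitize(track.track_title) === search);<br>  if (byTitle) return byTitle;<br>  // Otherwise, treat the query as a genre or tag and pick at random.<br>  const matches = tracks.filter((track) =&gt;<br>      track.genre.includes(search) || track.tags.includes(search));<br>  // Returns undefined if nothing matches.<br>  return matches[Math.floor(Math.random() * matches.length)];<br>}</pre>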
<p>Each song is ten minutes long. At the end, the Action receives a callback and asks the user for the next track to play. Since I know what track is currently playing, I was able to implement a repeat intent that plays it again from the beginning. You can see the implementation at the link below:</p><p><a href="https://medium.com/media/bfc642614e9a5ed7dc4d8d5834ff37f2/href">https://medium.com/media/bfc642614e9a5ed7dc4d8d5834ff37f2/href</a></p><p>As I began testing this, I saw that some users were not sure whether the Action was still active when they gave a follow-up query after music began to play.</p><p>Examining the <a href="https://dialogflow.com/docs/agents/history">history of interactions</a> in the Dialogflow console, I saw several instances where someone would say “Ask Tabletop Audio for Outpost 31”. This would then look for a track literally called “Ask Tabletop Audio for Outpost 31” and then tell the user that nothing was found.</p><p>With this usage data, I was able to return to the list of training phrases and add several additional phrases to better specify how to extract the search query.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/722/0*zp6uzUoC4PNO9CnD" /></figure><h3>Polished Voice Design</h3><p>Now that the playback and search were working well, it was time to add extra refinements to make the voice user interface better.</p><p>First, I decided to add two additional intents to help guide the user with answers to common questions. One intent would answer questions about which songs are new, and the other would explain what this Action can do.</p><p><a href="https://medium.com/media/dd5048dd0d145eac919c51ebf68b3ec0/href">https://medium.com/media/dd5048dd0d145eac919c51ebf68b3ec0/href</a></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/531/0*cwZhs8YOFvu7uuyi" /></figure><p>I also took the time to differentiate the visual response from the audible response. On visual surfaces like phones, it’s easy to provide additional information through on-screen suggestion chips or cards. With an audio-only surface like a speaker, I provided a more verbose answer.</p><p>To do this, I replaced many of the strings in my conv.ask with a SimpleResponse object that has both speech and text properties. My Action speaks the speech property aloud while showing the text on the screen. You can see an example of my updated welcome intent at the link below:</p><p><a href="https://medium.com/media/dd5048dd0d145eac919c51ebf68b3ec0/href">https://medium.com/media/dd5048dd0d145eac919c51ebf68b3ec0/href</a></p>
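<p>The pattern looks roughly like this minimal sketch (the response wording is illustrative):</p><pre>// Sketch: differentiating spoken and displayed text in a welcome intent.<br>import {dialogflow, SimpleResponse} from &#39;actions-on-google&#39;;<br><br>const app = dialogflow();<br><br>app.intent(&#39;Default Welcome Intent&#39;, (conv) =&gt; {<br>  conv.ask(new SimpleResponse({<br>    // Spoken on every surface.<br>    speech: &#39;Welcome! I can play a specific track or a genre. &#39; +<br>        &#39;What would you like?&#39;,<br>    // Shown on screens instead of the full spoken prompt.<br>    text: &#39;Welcome! Ask for a track or a genre.&#39;,<br>  }));<br>});</pre>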
<h3>Conclusion</h3><p>The <a href="https://assistant.google.com/services/a/uid/000000eec202808f">Tabletop Audio Action</a> is a real-world example of how you can build great audio experiences using media responses. With a few small design considerations, you can create a high-quality voice experience that also runs well on phones, smart displays, and many other Assistant surfaces.</p><p>To learn more about the implementation of this Action, you can view the source code on <a href="https://github.com/google/tabletopaudio-action">GitHub</a>. To learn more about building conversational Actions for the Google Assistant, start with <a href="https://developers.google.com/actions/assistant/">our developer documentation</a>, and for designing high-quality conversational Actions, see <a href="https://designguidelines.withgoogle.com/conversation/">our conversational design best practices</a>.</p><p><em>Want more? Head over to the </em><a href="https://reddit.com/r/googleassistantdev"><em>Actions on Google community on Reddit</em></a><em> to discuss Actions with other developers and share what you’ve built on Twitter with the hashtag #AoGDevs. Join the Actions on Google developer community program and you could earn a $200 monthly Google Cloud credit and an Assistant t-shirt when you publish your first app.</em></p><hr><p><a href="https://medium.com/google-developers/bringing-tabletop-audio-to-actions-on-google-through-media-responses-a48bbcd9a38">Bringing Tabletop Audio to Actions on Google through media responses</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Create rich, immersive Google Assistant Games with Interactive Canvas]]></title>
            <link>https://medium.com/google-developers/create-rich-immersive-google-assistant-games-with-interactive-canvas-b24ec30d2e31?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/b24ec30d2e31</guid>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[actions-on-google]]></category>
            <category><![CDATA[google-assistant]]></category>
            <dc:creator><![CDATA[Leon Nicholls]]></dc:creator>
            <pubDate>Thu, 01 Aug 2019 17:01:03 GMT</pubDate>
            <atom:updated>2019-11-11T17:12:23.036Z</atom:updated>
            <content:encoded><![CDATA[<p>At I/O this year, we announced <a href="https://developers.google.com/actions/interactivecanvas/">Interactive Canvas</a>, a new way to build immersive, full-screen experiences that combine the power of voice, visuals, and touch on Smart Displays and Android phones. Starting today, you can build and deploy your Interactive Canvas Action to users. Interactive Canvas is currently only available for games, but we will consider other verticals in the future.</p><p>With Interactive Canvas, you can create Conversational Actions that have rich, full-screen visuals and media using existing web technologies: HTML, JavaScript, CSS, and WebAssembly. This means you don’t need to learn new languages and tools; instead, you can use your favorite web development tools, libraries, and frameworks.</p><h3>Familiar lifecycle</h3><p>The typical lifecycle of an Action starts with the user invoking the Action. The Action starts and prompts the user for a response through voice and visuals. This back-and-forth conversation between the Action and the user continues until the user exits the Action.</p><p>The lifecycle of an Interactive Canvas Action is very similar to that of a <a href="https://developers.google.com/actions/conversational/overview">Conversational Action</a>. The main difference is that an Action can now return an HTML response that loads a web app on devices with displays.</p><figure><img alt="The lifecycle of an Interactive Canvas Action." src="https://cdn-images-1.medium.com/max/1024/0*Yyu982FXwLdl6tyF" /></figure><p>When a user talks to a device with a screen, the Dialogflow NLP matches an intent and then provides an HTML response that loads the web app. The web app initializes the Interactive Canvas API and can update its GUI to match the intent response data.</p><p>You now have a <a href="https://developers.google.com/actions/interactivecanvas/reference/interactivecanvas#sendtextquery">new API</a> for the web app to invoke an intent. The flow for that intent is the same as for voice input. This API is useful for triggering an intent in the web app logic to prompt the user for input. Typically, this is used for supporting custom interactive controls on the web app GUI, but it can also be called at any time when the Action needs user input.</p><p>It’s very easy to add support for the Interactive Canvas API to HTML. All it takes is including our Canvas JavaScript library and declaring a callback for handling the matched intents:</p><pre>&lt;html&gt;<br>  &lt;head&gt;<br>    &lt;script src=&quot;https://www.gstatic.com/assistant/interactivecanvas/api/interactive_canvas.min.js&quot;&gt;&lt;/script&gt;<br>  &lt;/head&gt;<br>  &lt;body&gt;<br>    &lt;script&gt;<br>      const callbacks = {<br>        onUpdate(data) {<br>          // update game state based on intent response data<br>        },<br>      };<br>      interactiveCanvas.ready(callbacks);<br>    &lt;/script&gt;<br>  &lt;/body&gt;<br>&lt;/html&gt;</pre><p>This way, the state and other data of your game can be synchronized between your backend logic and the web app graphics and animation.</p>
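<p>For example, a custom on-screen control can use that API, sendTextQuery(), to trigger an intent as if the user had spoken the query. This is a minimal, hypothetical sketch; the button ID and query text are illustrative:</p><pre>document.getElementById(&#39;playAgainButton&#39;)<br>    .addEventListener(&#39;click&#39;, async () =&gt; {<br>      // Behaves like a user utterance and runs the matched intent&#39;s flow.<br>      const state = await interactiveCanvas.sendTextQuery(&#39;play again&#39;);<br>      // The resolved state indicates whether the query was accepted.<br>      console.log(&#39;sendTextQuery state:&#39;, state);<br>    });</pre>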
<h3>Rich visuals</h3><p>Interactive Canvas gives you pixel-level control over the rendering of your UI. You’re not forced to use any templates or UI components that we provide, so you can design a visual experience that complements your brand identity by using your own web fonts, background images, color palette, and icons.</p><figure><img alt="Rich, immersive experiences are supported by Interactive Canvas." src="https://cdn-images-1.medium.com/max/600/0*usfqDp8bZ1bUAgOx" /></figure><p>You can add visual payoff to conversations by using custom layouts, transitions, and animations. HTML gives you a variety of ways to do animations, including manipulating the DOM or using JavaScript to dynamically control HTML canvas graphics or vector graphics with SVG.</p><p>We’ve optimized WebGL to allow for smooth 2D and 3D experiences. In our tests, we have achieved a steady 60 FPS while animating hundreds of sprites.</p><p>You can adapt these rich visuals to different devices and screen resolutions by using standard <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/Media_Queries/Using_media_queries">CSS media queries</a> or using your favorite library that supports responsive web design. You can also use the standard HTML <a href="https://developer.mozilla.org/en-US/docs/Web/API/Window/orientationchange_event">orientationchange</a> event to detect the device orientation.</p><h3>Custom GUIs</h3><p>You can now create your own custom buttons, tables, and lists. Use one of the many free libraries and frameworks that provide rich interactive components such as <a href="https://github.com/material-components/material-components-web-components">Material Web Components</a>, <a href="https://github.com/Polymer/polymer">Polymer elements</a>, <a href="https://jqueryui.com">jQuery UI</a>, or <a href="https://reactjs.org/">React</a>.</p><figure><img alt="Custom GUI components and layouts are possible using HTML." src="https://cdn-images-1.medium.com/max/600/0*Cy8LLy9uPiaIyfom" /></figure><p>You can also use standard HTML <a href="https://developer.mozilla.org/en-US/docs/Web/API/Touch_events">touch events</a> as an alternative to voice to allow the user to interact with your web app and to select options. For example, you can create your own slideshows and scrolling lists to allow users to explore options provided by your Action.</p><p>However, we recommend that you avoid the use of buttons and other GUI elements if there is a better way to use graphics to directly present items or choices. For example, if you want the user to select from one of two game items, show a graphic of each and then highlight the items to make it clear that the user can select either of the items by voice or touch.</p><h3>Media</h3><p>Your Interactive Canvas web app can also use powerful media APIs supported by modern web browsers. The <a href="https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement">HTML media element</a> allows you to play both audio and video. The media element gives you fine-grained control over the playback and allows you to track various media events.</p><p>Sound effects can be a great way to provide feedback and set the atmosphere. Delight your users with special sounds when they reach goals in the game or find easter eggs and buried treasure.</p><p>Use the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API">Web Audio API</a> to provide background music and feedback. With this API, you can play with effects and filters and mix sounds and music from various sources.</p><p>To help you increase the production value of your Action, we provide thousands of sounds in our <a href="https://developers.google.com/actions/tools/sound-library/">sound library</a>, which we host for you for free.</p><p>You can also use <a href="https://developers.google.com/actions/reference/ssml">SSML</a> in the intent responses and create layered sounds and music using the &lt;par&gt; tag, which is a unique feature of our platform.</p>
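<p>As a quick, hypothetical illustration of layering narration over a sound effect (the narration text, attribute values, and audio URL are placeholders; the URL follows the sound library’s pattern):</p><pre>&lt;speak&gt;<br>  &lt;par&gt;<br>    &lt;media xml:id=&quot;narration&quot; begin=&quot;0.5s&quot;&gt;<br>      &lt;speak&gt;A storm gathers over the castle.&lt;/speak&gt;<br>    &lt;/media&gt;<br>    &lt;media xml:id=&quot;thunder&quot; soundLevel=&quot;-6dB&quot; fadeOutDur=&quot;2s&quot;&gt;<br>      &lt;audio src=&quot;https://actions.google.com/sounds/v1/weather/thunder_crack.ogg&quot;/&gt;<br>    &lt;/media&gt;<br>  &lt;/par&gt;<br>&lt;/speak&gt;</pre>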
<h3>Speech synchronization</h3><p>Canvas Actions support the standard SSML <a href="https://www.w3.org/TR/speech-synthesis11/#S3.3.2">&lt;mark&gt;</a> tag, which allows you to place custom markers into the text sequence for the TTS:</p><pre>&lt;speak&gt;<br>  Go from &lt;mark name=&quot;here&quot;/&gt; here,<br>  to &lt;mark name=&quot;there&quot;/&gt; there!<br>&lt;/speak&gt;</pre><p>As the TTS is played back, events are generated for each mark, which can then be used in your web app logic to synchronize state updates and animations. The platform automatically provides ‘START’ and ‘END’ events for every TTS.</p><p>This feature can be used to highlight words as they are spoken or display graphics that match items as they are mentioned in the TTS.</p><p><em>Note: Support for custom marks will be rolling out in the next few months.</em></p><p><em>Update: We’ve added support for </em><a href="https://developers.google.com/assistant/interactivecanvas/reference/interactivecanvas#onttsmark"><em>custom marks</em></a><em> to synchronize Interactive Canvas animations with SSML events.</em></p><h3>Actions features</h3><p>Canvas Actions can leverage all of the existing APIs and features provided for conversational Actions.</p><p>Canvas web apps do have some restrictions, such as not being able to use cookies or persist data in local storage. However, if you need to persist data within a session, like tracking the number of turns in a game, you can use <a href="https://developers.google.com/actions/assistant/save-data#save_data_between_turns_of_a_conversation">conv.data</a> within your fulfillment logic. If you want to store data between sessions, like tracking the highest score, then use <a href="https://developers.google.com/actions/assistant/save-data#save_data_across_conversations">conv.user.storage</a> to persist data per user. If you need to sync game state across different platforms, you can use a cloud database, for example <a href="https://firebase.google.com/docs/firestore">Firestore</a>.</p>
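<p>A minimal sketch of the difference, using the Node.js client library (the intent name and field names are illustrative):</p><pre>const {dialogflow} = require(&#39;actions-on-google&#39;);<br>const app = dialogflow();<br><br>app.intent(&#39;guess&#39;, (conv) =&gt; {<br>  // conv.data lasts only for the current conversation.<br>  conv.data.turnCount = (conv.data.turnCount || 0) + 1;<br>  // conv.user.storage persists for this user across conversations.<br>  const highScore = conv.user.storage.highScore || 0;<br>  conv.ask(`Turn ${conv.data.turnCount}. Your best score is ${highScore}.`);<br>});</pre>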
<p>The Actions <a href="https://developers.google.com/actions/tools/simulator">simulator</a> fully supports testing your Canvas Actions, and you can use the <a href="https://developers.google.com/web/tools/chrome-devtools/">Chrome DevTools</a> to inspect the DOM, debug your JavaScript code, and optimize the performance.</p><p>You can also leverage <a href="https://developers.google.com/actions/assistant/updates/daily">daily updates</a>, <a href="https://developers.google.com/actions/assistant/updates/routines">routine suggestions</a>, and <a href="https://developers.google.com/actions/assistant/updates/notifications">notifications</a> to increase engagement with your users.</p><p>Earn an income by using our <a href="https://developers.google.com/actions/transactions/digital/dev-guide-digital">digital purchases API</a> to provide different kinds of digital goods to your users, such as one-time purchases or subscriptions.</p><h3>Game design</h3><p>We encourage you to think about new interaction models that weren’t possible on other platforms.</p><p>We have a few design ideas to help you be successful with your game:</p><ul><li><strong>Interactive visuals</strong>: Take advantage of the visual display in your core game experience by using visual information that users can respond to with their voice.</li><li><strong>Voice forward</strong>: Explore experiences where voice is the right input. Good examples include adventure games, visual puzzles solved by voice, and conversations with game characters to unlock new gameplay experiences.</li><li><strong>Shared space device</strong>: Smart displays enable local multiplayer games for families to play in a central location. Persistent games allow multiple people to collaborate with a shared game state.</li></ul><figure><img alt="Shared gaming experiences to solve visual games." src="https://cdn-images-1.medium.com/max/600/0*vk8DYt1gtbAnXgGO" /></figure><p>It’s early days for these kinds of games and we encourage you to experiment with new ideas and game designs. But mostly, just have fun! Interactive Canvas is an exciting opportunity for you to create voice-enabled games that are complemented with rich visuals and implemented in a language you already know — regular HTML.</p><h3>Next steps</h3><p>To get started with Canvas, take a look at our <a href="https://github.com/actions-on-google/dialogflow-interactive-canvas-nodejs">basic sample</a> or our <a href="https://github.com/actions-on-google/dialogflow-snowman-nodejs">simple game</a> that we have open sourced on GitHub. Make sure to read our <a href="https://developers.google.com/actions/interactivecanvas/">docs</a> and watch our introductory <a href="https://www.youtube.com/watch?v=wH-DVAoCQN0">video</a> to learn about the basics of Interactive Canvas.</p><p>Let the games begin!</p><p><em>Read our </em><a href="https://medium.com/google-developers/optimize-your-web-apps-for-interactive-canvas-18f8645f8382"><em>next post</em></a><em> on Interactive Canvas.</em></p><p><em>To share your thoughts or questions, join us on Reddit at </em><a href="https://www.reddit.com/r/GoogleAssistantDev/"><em>/r/GoogleAssistantDev</em></a><em>. 
Follow </em><a href="https://twitter.com/ActionsOnGoogle"><em>@ActionsOnGoogle</em></a><em> on Twitter for more of our team’s updates, and tweet using #AoGDevs to share what you’re working on.</em></p><hr><p><a href="https://medium.com/google-developers/create-rich-immersive-google-assistant-games-with-interactive-canvas-b24ec30d2e31">Create rich, immersive Google Assistant Games with Interactive Canvas</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>