<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Google Developers - Medium]]></title>
        <description><![CDATA[Engineering and technology articles for developers, written and curated by Googlers. The views expressed are those of the authors and don’t necessarily reflect those of Google. - Medium]]></description>
        <link>https://medium.com/google-developers?source=rss----2e5ce7f173a5---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*MXL-j6S8fTEd8UFP_foEEw.png</url>
            <title>Google Developers - Medium</title>
            <link>https://medium.com/google-developers?source=rss----2e5ce7f173a5---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 12 Jun 2017 08:05:56 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/google-developers" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Re-animation]]></title>
            <link>https://medium.com/google-developers/re-animation-7869722af206?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/7869722af206</guid>
            <category><![CDATA[android-app-development]]></category>
            <category><![CDATA[android]]></category>
            <category><![CDATA[animation]]></category>
            <dc:creator><![CDATA[Nick Butcher]]></dc:creator>
            <pubDate>Fri, 09 Jun 2017 13:05:36 GMT</pubDate>
            <atom:updated>2017-06-09T13:05:36.039Z</atom:updated>
<content:encoded><![CDATA[<p>In a previous article, I described a technique for creating vector animations on Android:</p><p><a href="https://medium.com/google-developers/animation-jump-through-861f4f5b3de4">Animation: Jump-through</a></p><p>At the time of writing, part of this technique (path morphing) was only supported on Lollipop and newer versions of the OS. <a href="https://developer.android.com/topic/libraries/support-library/revisions.html#25-4-0">Version 25.4</a> of the Android Support Library, however, back-ported this capability all the way back to Ice Cream Sandwich (i.e. an impressive <a href="https://developer.android.com/about/dashboards/index.html#Platform">99% of devices</a>). Check out this Google I/O session covering this and other awesome changes in the Support Library:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FV6-roIeNUY0%3Fstart%3D1503%26feature%3Doembed%26start%3D1503&amp;url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DV6-roIeNUY0&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FV6-roIeNUY0%2Fhqdefault.jpg&amp;key=d04bfffea46d4aeda930ec88cc64b87c&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/effa3654673fa875183860e060f0b92a/href">https://medium.com/media/effa3654673fa875183860e060f0b92a/href</a></iframe><p>I wanted to try out these new capabilities by updating my example to work on older devices and share my findings.</p><h4>14 is the new 21</h4><p>The first step was to simply change my minSdkVersion from 21 to 14. Opening up the exact same AnimatedVectorDrawable, the Lint tool pointed out a couple of errors. Specifically, I was using some of the standard Material interpolators, which were only introduced in API 21 and so wouldn’t be available on older platforms.
As the Support Library also back-ports PathInterpolators I simply copied their implementations from the platform into my project and referenced these instead.</p><p>I also set vectorDrawables.useSupportLibrary = true in my build.gradle file to <a href="https://medium.com/@chrisbanes/appcompat-v23-2-age-of-the-vectors-91cbafa87c88">tell the build toolchain</a> not to strip away vector resources on older devices.</p><h4>Under construction</h4><p>To play the animation, you need to get a reference to it in code and call start(). There are multiple ways that you can actually create the animated drawable:</p><ol><li>Reference it in your layout (app:srcCompat=&quot;@drawable/avd_foo&quot;) and later retrieve the drawable from the ImageView.</li><li>Use <a href="https://developer.android.com/reference/android/support/v7/content/res/AppCompatResources.html#getDrawable(android.content.Context, int)">AppCompatResources#getDrawable</a></li><li>Use <a href="https://developer.android.com/reference/android/support/graphics/drawable/AnimatedVectorDrawableCompat.html#create(android.content.Context, int)">AnimatedVectorDrawableCompat#create</a></li></ol><p>I found that methods 1 &amp; 2 can return different concrete classes; either an AnimatedVectorDrawable or an AnimatedVectorDrawable<strong>Compat</strong> depending upon which API you find yourself on.</p><blockquote>Interestingly the support library <a href="https://android.googlesource.com/platform/frameworks/support/+/master/v7/appcompat/src/android/support/v7/widget/AppCompatDrawableManager.java#92">currently uses</a> the native version on API 24+ and the compat version prior despite the class being introduced in API 21. 
This enables it to supply bug fixes to APIs 21–23.</blockquote><p>This can be problematic if/when you need to cast the drawable.</p><blockquote>Note that both classes implement <a href="https://developer.android.com/reference/android/graphics/drawable/Animatable.html">Animatable</a> so if all you need is to start/stop it then you can cast away safely. Additionally AnimatedVectorDrawableCompat offers a handy static method to <a href="https://developer.android.com/reference/android/support/graphics/drawable/AnimatedVectorDrawableCompat.html#registerAnimationCallback(android.graphics.drawable.Drawable, android.support.graphics.drawable.Animatable2Compat.AnimationCallback)">register callbacks</a>, which will check which type we’re dealing with and delegate the callback as appropriate.</blockquote><p>Instead I opted for door number 3; always using the compat class. This might add a tiny bit of overhead (as on newer platforms AVDC just delegates everything to the native class) but it made my consuming code simpler.</p><h4>Call me back</h4><p>One wrinkle I found was with the technique I used to make the animation loop. Unfortunately AnimatorSets do not support repeating, so I worked around this by adding an AnimationCallback which listens for the end of the animation and calls start again. This did not work on older platforms but I was able to work around it by posting the start call on a handler to be executed after the end callback:</p><pre>avd?.registerAnimationCallback(<br>        object : Animatable2Compat.AnimationCallback() {<br>    override fun onAnimationEnd(drawable: Drawable?) {<br>        imageView.<strong>post</strong> {<strong> </strong>avd.start() }<br>    }<br>})</pre><h4>Stale state</h4><p>Parts of the animation only run at certain points within the loop; for example the dots fade in/out when they enter/exit the scene. 
On older devices, I found that their ‘state’ wasn’t being reset (to how it was defined in the VectorDrawable) on each loop.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/1*SYFNYDLLY8Y0Jcs_tGl1WA.gif" /></figure><p>Notice how the grey dots are (incorrectly) visible when they enter from the right, then briefly disappear and <em>then</em> fade in. To fix this, I added a zero-length animation to set properties to their expected value at the start of each loop so that they’re ready to be animated, e.g.:</p><pre>&lt;!-- Fade dot 5 in as it enters the scene --&gt;<br>&lt;set&gt;<br>    &lt;objectAnimator<br>        android:propertyName=&quot;fillAlpha&quot;<br>        android:valueFrom=&quot;0&quot;<br>        android:<strong>valueTo=&quot;0&quot;</strong><br>        android:<strong>duration=&quot;0&quot;</strong> /&gt;<br>    &lt;objectAnimator<br>        android:propertyName=&quot;fillAlpha&quot;<br>        android:valueFrom=&quot;0&quot;<br>        android:valueTo=&quot;1&quot;<br>        android:startOffset=&quot;1900&quot;<br>        android:duration=&quot;60&quot;<br>        android:interpolator=&quot;@android:interpolator/linear&quot; /&gt;<br>&lt;/set&gt;</pre><h4>Form a queue</h4><p>The last issue I hit was a problem with <em>sequentially</em> ordered AnimatorSets. The main pin jump animation is a sequence of path morph animations, from keyframe to keyframe. My animation assumed that this sequence would run for the sum of all of the individual animators’ durations. On older platforms, however, a bug causes each animator to wait for the next frame boundary before starting. These small delays add up such that the animation took <em>longer</em> than the sum of durations, so other parts of the composition would be mis-timed.
I was able to work around this by switching to ordering=&quot;together&quot; instead and using startOffsets on each individual animator to start them at the right time.</p><h4>Impressively unimpressive</h4><p>The end result is extremely impressive in its un-impressiveness. That is, the animation looks exactly the same as before but now runs on many more devices.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/468/1*S4m8qfUcavdD3qUBMgC1Jg.gif" /><figcaption>Animation running on API 16. #holoyolo.</figcaption></figure><p>Even though I did encounter some issues getting this to work on older devices, they were all pretty easily resolved and I think that the ability to run on so many more API levels makes the effort <em>well</em> worth it.</p><p>I was pleased with how many things just worked, including the XML bundle format, which allows you to specify the VectorDrawable and animations in a single file. Lint tooling was also helpful in pointing out some problems. You can find my code for the back-ported animation <a href="https://gist.github.com/nickbutcher/b1806905c6bc0ef29f545fd580935bd3">here on GitHub</a>.</p><p>If you were holding off adding awesome animations to your application because of a lack of API support, then hold-off-no-more. If you’re looking into path-morphing animations, then be sure to check out <a href="https://medium.com/u/f61fb1c467cd">Alex Lockwood</a>’s amazing <a href="https://shapeshifter.design/">ShapeShifter</a> tool which will help you to create morph-able shapes.
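</p><p>The “Form a queue” workaround described above can be sketched roughly like this (the property values, durations, and offsets here are illustrative placeholders, not the actual keyframes from this animation):</p>

```xml
<!-- Run the morphs together and stagger them with startOffset, so the
     sequence lasts exactly the sum of the individual durations. -->
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:ordering="together">
    <objectAnimator
        android:propertyName="pathData"
        android:valueFrom="@string/keyframe_1"
        android:valueTo="@string/keyframe_2"
        android:valueType="pathType"
        android:duration="200" />
    <objectAnimator
        android:propertyName="pathData"
        android:valueFrom="@string/keyframe_2"
        android:valueTo="@string/keyframe_3"
        android:valueType="pathType"
        android:startOffset="200"
        android:duration="200" />
</set>
```

<p>With this arrangement no animator waits on another one finishing, so the per-animator frame-boundary delays no longer accumulate.</p><p>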
If this has inspired you to create something, then <a href="http://twitter.com/crafty">let me know</a>!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7869722af206" width="1" height="1"><hr><p><a href="https://medium.com/google-developers/re-animation-7869722af206">Re-animation</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Lifecycle Aware Data Loading with Android Architecture Components]]></title>
            <link>https://medium.com/google-developers/lifecycle-aware-data-loading-with-android-architecture-components-f95484159de4?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/f95484159de4</guid>
            <category><![CDATA[android-app-development]]></category>
            <category><![CDATA[androiddev]]></category>
            <category><![CDATA[architecture-components]]></category>
            <dc:creator><![CDATA[Ian Lake]]></dc:creator>
            <pubDate>Wed, 17 May 2017 23:01:01 GMT</pubDate>
            <atom:updated>2017-05-20T22:30:00.539Z</atom:updated>
<content:encoded><![CDATA[<h3>Lifecycle Aware Data Loading with Architecture Components</h3><p>In my <a href="https://medium.com/google-developers/making-loading-data-on-android-lifecycle-aware-897e12760832">previous blog post</a>, I talked about how you can use <a href="https://developer.android.com/guide/components/loaders.html">Loaders</a> to load data in a way that automatically handles configuration changes.</p><p>With the introduction of <a href="https://developer.android.com/topic/libraries/architecture/index.html">Architecture Components</a>, there’s an alternative that provides a modern, flexible, and testable solution to this use case.</p><h3>Separation of concerns</h3><p>Two of the largest benefits of Loaders were:</p><ul><li>They encapsulate the process of data loading</li><li>They survive configuration changes, preventing unnecessary reloading of data</li></ul><p>With Architecture Components, these two benefits are now handled by two separate classes:</p><ul><li><a href="https://developer.android.com/topic/libraries/architecture/livedata.html">LiveData</a> provides a lifecycle aware base class for encapsulating data loading</li><li><a href="https://developer.android.com/topic/libraries/architecture/viewmodel.html">ViewModels</a> are automatically retained across configuration changes</li></ul><p>One significant advantage of this separation is that you can reuse the same LiveData in multiple ViewModels, compose multiple LiveData sources together through a <a href="https://developer.android.com/reference/android/arch/lifecycle/MediatorLiveData.html">MediatorLiveData</a>, or use them in a Service, avoiding the effort of trying to munge a Loader into a scenario where you don’t have a LoaderManager.</p><p>While Loaders espoused a separation between your UI and data loading (one of the first steps to a testable app!), this model expands on that advantage — your ViewModel can be completely tested by mocking out your data sources and the LiveData
can be tested in complete isolation. A clean, testable architecture was a large focus in the <a href="https://developer.android.com/topic/libraries/architecture/guide.html">Guide to App Architecture</a>.</p><h3>Keep it simple</h3><p>That all sounds good in theory. An illustrative example <a href="https://medium.com/google-developers/making-loading-data-on-android-lifecycle-aware-897e12760832#5aa5">recreating our AsyncTaskLoader</a> might help make the ideas a bit more concrete:</p><pre>public class JsonViewModel extends AndroidViewModel {<br>  // You probably have something more complicated<br>  // than just a String. Roll with me<br>  private final MutableLiveData&lt;List&lt;String&gt;&gt; data =<br>      new MutableLiveData&lt;List&lt;String&gt;&gt;();</pre><pre>  public JsonViewModel(Application application) {<br>    super(application);<br>    loadData();<br>  }</pre><pre>  public LiveData&lt;List&lt;String&gt;&gt; getData() {<br>    return data;<br>  }</pre><pre>  private void loadData() {<br>    new AsyncTask&lt;Void,Void,List&lt;String&gt;&gt;() {<br>      @Override<br>      protected List&lt;String&gt; doInBackground(Void... voids) {<br>        File jsonFile = new File(getApplication().getFilesDir(),<br>            &quot;downloaded.json&quot;);<br>        List&lt;String&gt; data = new ArrayList&lt;&gt;();<br>        // Parse the JSON using the library of your choice<br>        return data;<br>      }</pre><pre>      @Override<br>      protected void onPostExecute(List&lt;String&gt; data) {<br>        // Qualify the field: a bare this.data would (incorrectly)<br>        // look for a data field on the AsyncTask itself<br>        JsonViewModel.this.data.setValue(data);<br>      }<br>    }.execute();<br>  }<br>}</pre><p>Wait, an AsyncTask? How is this safe?
There are two safety features in play here:</p><ol><li>The <a href="https://developer.android.com/reference/android/arch/lifecycle/AndroidViewModel.html">AndroidViewModel</a> (a subclass of ViewModel) only has a reference to the application Context, so we’re very importantly not referencing the Context of an Activity, etc. that could present a leak — there’s even a Lint check to avoid this kind of issue.</li><li>LiveData only delivers the results if there’s something observing it</li></ol><p>But we haven’t quite captured the essence of the Architecture Components: our ViewModel is directly building and managing our LiveData.</p><pre>public class JsonViewModel extends AndroidViewModel {<br>  private final JsonLiveData data;</pre><pre>  public JsonViewModel(Application application) {<br>    super(application);<br>    data = new JsonLiveData(application);<br>  }</pre><pre>  public LiveData&lt;List&lt;String&gt;&gt; getData() {<br>    return data;<br>  }<br>}</pre><pre>public class JsonLiveData extends LiveData&lt;List&lt;String&gt;&gt; {<br>  private final Context context;</pre><pre>  public JsonLiveData(Context context) {<br>    this.context = context;<br>    loadData();<br>  }</pre><pre>  private void loadData() {<br>    new AsyncTask&lt;Void,Void,List&lt;String&gt;&gt;() {<br>      @Override<br>      protected List&lt;String&gt; doInBackground(Void... voids) {<br>        File jsonFile = new File(context.getFilesDir(),<br>            &quot;downloaded.json&quot;);<br>        List&lt;String&gt; data = new ArrayList&lt;&gt;();<br>        // Parse the JSON using the library of your choice<br>        return data;<br>      }</pre><pre>      @Override<br>      protected void onPostExecute(List&lt;String&gt; data) {<br>        setValue(data);<br>      }<br>    }.execute();<br>  }<br>}</pre><p>So our ViewModel gets considerably simpler, as you’d expect.
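</p><p>To make the second safety feature above concrete (results are only delivered while something is actively observing), here is a deliberately simplified, framework-free sketch of that gating behaviour. This is an illustration of the contract only, not the real LiveData implementation, which derives “active” from the observer’s lifecycle state rather than an explicit flag:</p>

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

// Simplified stand-in for LiveData's "active observer" gating.
class SimpleLiveData<T> {
  private final List<Consumer<T>> observers = new ArrayList<>();
  private T value;
  private boolean hasValue;
  private boolean active;

  void observe(Consumer<T> observer) {
    observers.add(observer);
    if (active && hasValue) {
      observer.accept(value); // new observers of active data get the latest value
    }
  }

  // Models the onActive()/onInactive() transitions.
  void setActive(boolean nowActive) {
    boolean wasActive = active;
    active = nowActive;
    if (!wasActive && nowActive && hasValue) {
      dispatch(); // catch observers up when we become active again
    }
  }

  void setValue(T newValue) {
    value = newValue;
    hasValue = true;
    if (active) {
      dispatch(); // values set while inactive wait for the next activation
    }
  }

  private void dispatch() {
    for (Consumer<T> observer : observers) {
      observer.accept(value);
    }
  }
}
```

<p>A value set while inactive is not lost; it is delivered as soon as an observer becomes active again, which mirrors why a stopped Activity receives the latest data when it restarts rather than triggering a fresh load.</p><p>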
Our LiveData now completely encapsulates the loading process, loading the data only once.</p><h3>Data changes in a LiveData world</h3><p>Just like how <a href="https://medium.com/google-developers/making-loading-data-on-android-lifecycle-aware-897e12760832#1e8c">Loaders can react to changes elsewhere</a>, this same functionality is key when working with LiveData — as the name implies, the data is expected to change! We can easily rework our class to continue to load data while there’s an observer:</p><pre>public class JsonLiveData extends LiveData&lt;List&lt;String&gt;&gt; {<br>  private final Context context;<br>  private final FileObserver fileObserver;</pre><pre>  public JsonLiveData(Context context) {<br>    this.context = context;<br>    String path = new File(context.getFilesDir(),<br>        &quot;downloaded.json&quot;).getPath();<br>    fileObserver = new FileObserver(path) {<br>      @Override<br>      public void onEvent(int event, String path) {<br>        // The file has changed, so let’s reload the data<br>        loadData();<br>      }<br>    };<br>    loadData();<br>  }</pre><pre>  @Override<br>  protected void onActive() {<br>    fileObserver.startWatching();<br>  }</pre><pre>  @Override<br>  protected void onInactive() {<br>    fileObserver.stopWatching();<br>  }</pre><pre>  private void loadData() {<br>    new AsyncTask&lt;Void,Void,List&lt;String&gt;&gt;() {<br>      @Override<br>      protected List&lt;String&gt; doInBackground(Void... voids) {<br>        File jsonFile = new File(context.getFilesDir(),<br>            &quot;downloaded.json&quot;);<br>        List&lt;String&gt; data = new ArrayList&lt;&gt;();<br>        // Parse the JSON using the library of your choice<br>        return data;<br>      }</pre><pre>      @Override<br>      protected void onPostExecute(List&lt;String&gt; data) {<br>        setValue(data);<br>      }<br>    }.execute();<br>  }<br>}</pre><p>Now that we’re interested in listening to changes, we can take advantage
of LiveData’s onActive() and onInactive() callbacks to only listen when there’s an active observer on our data — as long as someone is observing, they can be guaranteed to get the latest data.</p><h3>Observing data</h3><p>In the Loader world, getting your data to your UI would involve a LoaderManager, calling initLoader() in the right place, and building a LoaderCallbacks. Things are a bit more straightforward in the Architecture Components world.</p><p>There are two things we need to do:</p><ol><li>Get a reference to our ViewModel</li><li>Start observing our LiveData</li></ol><p>But explaining that is almost as long as the code itself:</p><pre>public class MyActivity extends AppCompatActivity {<br>  @Override<br>  protected void onCreate(Bundle savedInstanceState) {<br>    super.onCreate(savedInstanceState);<br>    JsonViewModel model =<br>        ViewModelProviders.of(this).get(JsonViewModel.class);<br>    model.getData().observe(this, data -&gt; {<br>      // update UI<br>    });<br>  }<br>}</pre><p>You’ll note there’s no need to clean things up after the fact: ViewModels automatically live only as long as they are needed and LiveData automatically only passes you data when the calling Activity/Fragment/<a href="https://developer.android.com/topic/libraries/architecture/lifecycle.html#lco">LifecycleOwner</a> is started or resumed.</p><h3>Load all the things</h3><p>Now, if you’re still in the vehemently-against-AsyncTask camp, I’m totally okay with that: LiveData is a lot more flexible than being tied to only that construct.</p><p>For example, <a href="https://developer.android.com/topic/libraries/architecture/room.html">Room</a> lets you have <a href="https://developer.android.com/topic/libraries/architecture/room.html#daos-query-observable">observable queries</a> — database queries that return LiveData so that database changes automatically propagate up through your ViewModel to your UI.
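</p><p>As a rough sketch (the entity, table, and method names here are made up for illustration), such an observable query is just a DAO method whose return type is wrapped in LiveData:</p>

```java
@Dao
public interface UserDao {
  // Room re-runs the query and pushes fresh results to active observers
  // whenever the underlying table changes.
  @Query("SELECT * FROM user")
  LiveData<List<User>> loadUsers();
}
```

<p>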
Kind of like a CursorLoader without touching Cursors or Loaders.</p><p>We can also rewrite the <a href="https://medium.com/google-developers/making-loading-data-on-android-lifecycle-aware-897e12760832#85b1">FusedLocationApi example</a> with a LiveData class:</p><pre>public class LocationLiveData extends LiveData&lt;Location&gt; implements<br>    GoogleApiClient.ConnectionCallbacks,<br>    GoogleApiClient.OnConnectionFailedListener,<br>    LocationListener {<br>  private GoogleApiClient googleApiClient;</pre><pre>  public LocationLiveData(Context context) {<br>    googleApiClient =<br>      new GoogleApiClient.Builder(context, this, this)<br>      .addApi(LocationServices.API)<br>      .build();<br>  }</pre><pre>  @Override<br>  protected void onActive() {<br>    // Wait for the GoogleApiClient to be connected<br>    googleApiClient.connect();<br>  }</pre><pre>  @Override<br>  protected void onInactive() {<br>    if (googleApiClient.isConnected()) {<br>      LocationServices.FusedLocationApi.removeLocationUpdates(<br>          googleApiClient, this);<br>    }<br>    googleApiClient.disconnect();<br>  }</pre><pre>  @Override<br>  public void onConnected(Bundle connectionHint) {<br>    // Try to immediately find a location<br>    Location lastLocation = LocationServices.FusedLocationApi<br>        .getLastLocation(googleApiClient);<br>    if (lastLocation != null) {<br>      setValue(lastLocation);<br>    }</pre><pre>    // Request updates if there’s someone observing<br>    if (hasActiveObservers()) {<br>      LocationServices.FusedLocationApi.requestLocationUpdates(<br>          googleApiClient, new LocationRequest(), this);<br>    }<br>  }</pre><pre>  @Override<br>  public void onLocationChanged(Location location) {<br>    // Deliver the location changes<br>    setValue(location);<br>  }</pre><pre>  @Override<br>  public void onConnectionSuspended(int cause) {<br>    // Cry softly, hope it comes back on its own<br>  }</pre><pre>  @Override<br>  public void 
onConnectionFailed(<br>      @NonNull ConnectionResult connectionResult) {<br>    // Consider exposing this state as described here:<br>    // <a href="https://d.android.com/topic/libraries/architecture/guide.html#addendum">https://d.android.com/topic/libraries/architecture/guide.html#addendum</a><br>  }<br>}</pre><h3>Just scratching the surface of the Architecture Components</h3><p>There’s a lot more to the Android Architecture Components, so make sure to check out <a href="https://developer.android.com/topic/libraries/architecture/index.html">all of the documentation</a>.</p><p>I’d personally strongly recommend reading through the entire <a href="https://developer.android.com/topic/libraries/architecture/guide.html">Guide to App Architecture</a> to give you an idea on how all of these components come together to form a solid architecture for your entire app.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f95484159de4" width="1" height="1"><hr><p><a href="https://medium.com/google-developers/lifecycle-aware-data-loading-with-android-architecture-components-f95484159de4">Lifecycle Aware Data Loading with Android Architecture Components</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Developer Show — TL;DR 069]]></title>
            <link>https://medium.com/google-developers/the-developer-show-tl-dr-069-e28b72620835?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/e28b72620835</guid>
            <category><![CDATA[makers]]></category>
            <category><![CDATA[chrome]]></category>
            <category><![CDATA[google-cloud-platform]]></category>
            <category><![CDATA[containers]]></category>
            <category><![CDATA[google-io-2017]]></category>
            <dc:creator><![CDATA[timothyjordan]]></dc:creator>
            <pubDate>Mon, 15 May 2017 17:46:06 GMT</pubDate>
            <atom:updated>2017-05-15T17:49:56.573Z</atom:updated>
<content:encoded><![CDATA[<p>Highlights: Google I/O, AIY Maker Kit, Chrome 59 Beta</p><p>The Developer Show is where you can stay up to date on all the latest Google Developer news, straight from the experts.</p><p>Have a question? Use #AskDevShow to let us know!</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fvideoseries%3Flist%3DPLOU2XLYxmsII8REpkzsy1bJHj6G1WEVA1&amp;url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D0IwJEUoetQQ&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F0IwJEUoetQQ%2Fhqdefault.jpg&amp;key=d04bfffea46d4aeda930ec88cc64b87c&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/9f6d06aa1015e9db431d33974643ce86/href">https://medium.com/media/9f6d06aa1015e9db431d33974643ce86/href</a></iframe><h4><a href="https://android-developers.googleblog.com/2017/05/build-portfolio-of-apps-as-you-improve.html?utm_campaign=devshow_series_androiddevelopment_051217&amp;utm_source=medium&amp;utm_medium=blog">Advanced Android App Development</a></h4><p>The Advanced Android App Development online course has been updated, improved, and extended. With it, you can build a portfolio of apps as you improve your Android dev skills. Check out the course, <a href="https://android-developers.googleblog.com/2017/05/build-portfolio-of-apps-as-you-improve.html?utm_campaign=devshow_series_androiddevelopment_051217&amp;utm_source=medium&amp;utm_medium=blog">linked on the post</a>.</p><h4><a href="https://developers.googleblog.com/2017/05/aiy-projects-voice-kit.html?utm_campaign=devshow_series_aiy_051217&amp;utm_source=medium&amp;utm_medium=blog">AIY Projects: Do-it-yourself AI for Makers</a></h4><p>We recently launched AIY Projects: do-it-yourself artificial intelligence for Makers. With it, makers can use artificial intelligence to make human-to-machine interaction more like human-to-human interactions.
We’ll be releasing a series of reference kits, starting with voice recognition. <a href="https://developers.googleblog.com/2017/05/aiy-projects-voice-kit.html?utm_campaign=devshow_series_aiy_051217&amp;utm_source=medium&amp;utm_medium=blog">More details and links are on the post</a>.</p><h4><a href="https://blog.chromium.org/2017/05/chrome-59-beta-headless-chromium-native.html?utm_campaign=devshow_series_chrome59beta_051217&amp;utm_source=medium&amp;utm_medium=blog">Chrome 59 Beta</a></h4><p>Chrome 59 Beta is now available with Headless Chromium, native notifications on macOS, service worker navigation preload, and more. <a href="https://blog.chromium.org/2017/05/chrome-59-beta-headless-chromium-native.html?utm_campaign=devshow_series_chrome59beta_051217&amp;utm_source=medium&amp;utm_medium=blog">All the details are on the post</a>.</p><h4><a href="https://cloudplatform.googleblog.com/2017/05/Google-Cloud-Launcher-adds-more-container-support.html?utm_campaign=devshow_series_googlecloudlauncher_051217&amp;utm_source=medium&amp;utm_medium=blog">Google Cloud Launcher adds more container support</a></h4><p>Google Cloud Launcher has more Google maintained containers including Cassandra, ElasticSearch, Jenkins, MySQL, and more. Google container solutions are managed by Google engineers and since we’re maintaining the images, the containers available on Google Cloud Launcher will be current with the latest application and security updates.</p><h4><a href="https://cloudplatform.googleblog.com/2017/05/Cloud-Natural-Language-API-enters-beta.html?utm_campaign=devshow_series_googlecloud_051217&amp;utm_source=medium&amp;utm_medium=blog">Google Cloud</a></h4><p>There are two announcements from Google Cloud Next London that I wanted to tell you about… First, Google Cloud Natural Language API is adding support for new languages and entity sentiment analysis. And second, Cloud Spanner is now generally available. 
<a href="https://cloudplatform.googleblog.com/2017/05/Cloud-Natural-Language-API-enters-beta.html?utm_campaign=devshow_series_googlecloud_051217&amp;utm_source=medium&amp;utm_medium=blog">Check out the details of both announcements on the post</a>.</p><h4><a href="https://developers.googleblog.com/2017/05/google-io-2017-on-your-mobile-devices.html?utm_campaign=devshow_series_googleio_051217&amp;utm_source=medium&amp;utm_medium=blog">Google I/O</a></h4><p>Google I/O is juuuuust around the corner — and if you’re like me, you like to go in prepared. Which is why we have an Android, iOS, and web app to help you customize your I/O schedule and get around the developer festival. <a href="https://developers.googleblog.com/2017/05/google-io-2017-on-your-mobile-devices.html?utm_campaign=devshow_series_googleio_051217&amp;utm_source=medium&amp;utm_medium=blog">Check out the screenshots and find the download links on the post</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e28b72620835" width="1" height="1"><hr><p><a href="https://medium.com/google-developers/the-developer-show-tl-dr-069-e28b72620835">The Developer Show — TL;DR 069</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Developer Show — TL;DR 068]]></title>
            <link>https://medium.com/google-developers/the-developer-show-tl-dr-068-e6c54bc18dce?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/e6c54bc18dce</guid>
            <category><![CDATA[google-cloud-platform]]></category>
            <category><![CDATA[https]]></category>
            <category><![CDATA[google-assistant]]></category>
            <category><![CDATA[cloud-endpoints]]></category>
            <category><![CDATA[security]]></category>
            <dc:creator><![CDATA[timothyjordan]]></dc:creator>
            <pubDate>Tue, 09 May 2017 15:15:42 GMT</pubDate>
            <atom:updated>2017-05-09T15:15:42.723Z</atom:updated>
<content:encoded><![CDATA[<p>Highlights: Google Assistant SDK, increased connection security, and Google Cloud Endpoints</p><p>The Developer Show is where you can stay up to date on all the latest Google Developer news, straight from the experts.</p><p>Have a question? Use #AskDevShow to let us know!</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fvideoseries%3Flist%3DPLOU2XLYxmsII8REpkzsy1bJHj6G1WEVA1&amp;url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dp9mEryD_TvY&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fp9mEryD_TvY%2Fhqdefault.jpg&amp;key=d04bfffea46d4aeda930ec88cc64b87c&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/7c56d83e9acc2bff9d4c36bfa1e19a08/href">https://medium.com/media/7c56d83e9acc2bff9d4c36bfa1e19a08/href</a></iframe><h4><a href="https://developers.googleblog.com/2017/04/introducing-google-assistant-sdk.html?utm_campaign=devshow_series_googleassistantsdk_050517&amp;utm_source=medium&amp;utm_medium=blog">Google Assistant SDK</a></h4><p>Bring voice control, natural language understanding, Google’s smarts, and more to your devices using the Google Assistant SDK. The developer preview is now available. <a href="https://developers.googleblog.com/2017/04/introducing-google-assistant-sdk.html?utm_campaign=devshow_series_googleassistantsdk_050517&amp;utm_source=medium&amp;utm_medium=blog">Get some sample code and videos from the post</a>.</p><h4><a href="https://maps-apis.googleblog.com/2017/04/introducing-structured-menus-in-google.html?utm_campaign=devshow_series_googlemybusinessapi_050517&amp;utm_source=medium&amp;utm_medium=blog">Structured menus in Google My Business API</a></h4><p>Businesses that use the Google My Business API can now publish their entire menu to Google — itemized with descriptions, photos and prices — making it frictionless for their customers to view their menus on Google.
<a href="https://maps-apis.googleblog.com/2017/04/introducing-structured-menus-in-google.html?utm_campaign=devshow_series_googlemybusinessapi_050517&amp;utm_source=medium&amp;utm_medium=blog">Screenshots and sample code are on the post</a>.</p><h4><a href="https://gsuite-developers.googleblog.com/2017/04/create-quizzes-in-google-forms-with.html?utm_campaign=devshow_series_quizzesgoogleforms_050517&amp;utm_source=medium&amp;utm_medium=blog">Create quizzes in Google Forms with Apps Script</a></h4><p>Quizzes in Google Forms help teachers automate testing and give feedback to students faster by having Forms check responses against correct answers automatically. And now you can create these quizzes programmatically with Apps Script. <a href="https://gsuite-developers.googleblog.com/2017/04/create-quizzes-in-google-forms-with.html?utm_campaign=devshow_series_quizzesgoogleforms_050517&amp;utm_source=medium&amp;utm_medium=blog">More screenshots and code are on the post</a>.</p><h4><a href="https://blog.chromium.org/2017/04/next-steps-toward-more-connection.html?utm_campaign=devshow_series_connectionsecurity_050517&amp;utm_source=medium&amp;utm_medium=blog">Next steps toward more connection security</a></h4><p>Beginning in October 2017, Chrome will show the “Not secure” warning in two more situations: when users enter data on an HTTP page, and on all HTTP pages visited in Incognito mode. Which makes it a good time for me to say: HTTPS is easier and cheaper than ever before, and it enables both the best performance the web offers and powerful new features that are too sensitive for HTTP. Check out our set-up guides to get started. 
<a href="https://blog.chromium.org/2017/04/next-steps-toward-more-connection.html?utm_campaign=devshow_series_connectionsecurity_050517&amp;utm_source=medium&amp;utm_medium=blog">Everything’s linked from the post</a>.</p><h4><a href="https://cloudplatform.googleblog.com/2017/04/manage-your-gRPC-APIs-with-Google-Cloud-Endpoints.html?utm_campaign=devshow_series_grpcapis_050517&amp;utm_source=medium&amp;utm_medium=blog">Manage gRPC APIs with Google Cloud Endpoints</a></h4><p>You can define and run a gRPC API and serve both gRPC and JSON-HTTP/1.1 to your clients using Google Cloud Endpoints! <a href="https://cloudplatform.googleblog.com/2017/04/manage-your-gRPC-APIs-with-Google-Cloud-Endpoints.html?utm_campaign=devshow_series_grpcapis_050517&amp;utm_source=medium&amp;utm_medium=blog">More details are on the post</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e6c54bc18dce" width="1" height="1"><hr><p><a href="https://medium.com/google-developers/the-developer-show-tl-dr-068-e6c54bc18dce">The Developer Show — TL;DR 068</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to generate fabulous API reference docs for iOS]]></title>
            <link>https://medium.com/google-developers/how-to-generate-fabulous-api-reference-docs-for-ios-5bd297b9697d?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/5bd297b9697d</guid>
            <category><![CDATA[swift]]></category>
            <category><![CDATA[api]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[ios-app-development]]></category>
            <category><![CDATA[ios]]></category>
            <dc:creator><![CDATA[Ibrahim Ulukaya]]></dc:creator>
            <pubDate>Tue, 25 Apr 2017 23:41:59 GMT</pubDate>
            <atom:updated>2017-04-25T23:43:46.432Z</atom:updated>
            <content:encoded><![CDATA[<p>At Firebase, we were using <a href="http://www.stack.nl/~dimitri/doxygen/">Doxygen</a> to generate <a href="https://firebase.google.com/docs/reference/ios/firebasecore/api/reference/Classes">iOS API reference docs</a>. While it did the job for ObjectiveC, it didn’t support Swift. We started using <a href="https://github.com/realm/jazzy">Jazzy</a> to get the look and feel of Apple’s official reference documentation, the documentation iOS developers are used to.</p><p><a href="https://github.com/realm/jazzy">jazzy</a> is a command-line utility that generates documentation for Swift or Objective-C. It is composed of two parts:</p><ol><li>The parser, <a href="https://github.com/jpsim/SourceKitten">SourceKitten</a> (written in Swift)</li><li>The site generator (written in Ruby)</li></ol><p>It leverages</p><ul><li>modern HTML templating (<a href="http://mustache.github.io/">Mustache</a>)</li><li>the power and accuracy of the <a href="http://clang.llvm.org/docs/IntroductionToTheClangAST.html">Clang AST</a> and <a href="http://www.jpsim.com/uncovering-sourcekit">SourceKit</a></li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LP1jnAzlUxe6wxYFDaX5WQ.jpeg" /><figcaption>Firebase iOS API Reference documentation</figcaption></figure><p>In our setup, we have a shell script that</p><ul><li>simplifies jazzy setup,</li><li>unifies the setup across our different modules,</li><li>sets the Mustache templates.</li></ul><p>On our first try, we were awed by the beautiful documentation right off the bat. But we saw a couple of issues, which arose because Jazzy was <strong>only accepting ObjectiveC/Swift style comments</strong>, whereas Doxygen also supported C++ style comments. 
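To make the comment-style gap concrete, here is roughly what one of the rewrites from the preprocessing script shown below does to a Doxygen-style header (the sample header contents and file names here are invented for illustration):

```shell
# Invented sample header using Doxygen-style keywords that Jazzy rejects
printf '/** @brief Signs the user in.\n @discussion Runs asynchronously. */\n' > /tmp/FIRSample.h

# Strip the unsupported keywords, in the spirit of the preprocessing script
sed -e 's/@brief //g' -e 's/@discussion //g' /tmp/FIRSample.h > /tmp/FIRSample.clean.h
cat /tmp/FIRSample.clean.h
```

After the rewrite, the comment reads as a plain /** ... */ block that Jazzy parses happily, while the header is only touched for the duration of the docs build.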
Before <a href="https://medium.com/@ulukaya/how-to-migrate-ios-api-reference-from-doxygen-to-jazzy-35bd89c6a92b">diving down to each issue</a>, let me explain how we tackled them:</p><ol><li>We created a Jazzy style guide for comments for future development.</li><li>We used Jazzy’s --skip-documentation parameter to find the missing or broken comments. We integrated this parameter into our script as well for due diligence.</li><li>We worked with engineers to update the comments to the new syntax as soon as we could.</li><li>In the interim, we created a preprocessing script that temporarily removes unsupported keywords from our header files (just for the run).</li></ol><pre>sed -i '' -e 's/ * @method [:_.[:alnum:]]*[:[:alnum:]]//g' *.h<br>sed -i '' -e 's/ * @typedef [:_.[:alnum:]]*[:[:alnum:]]//g' *.h<br>sed -i '' -e 's/ * @class [:_.[:alnum:]]*[:[:alnum:]]//g' *.h<br>sed -i '' -e 's/ * @fn [:_.[:alnum:]]*[:[:alnum:]]//g' *.h<br>sed -i '' -e 's/ * @property [:_.[:alnum:]]*[:[:alnum:]]//g' *.h<br>sed -i '' -e 's/ * @enum [:_.[:alnum:]]*[:[:alnum:]]//g' *.h<br>sed -i '' -e 's/@abstract //g' *.h<br>sed -i '' -e 's/@brief //g' *.h<br>sed -i '' -e 's/@discussion //g' *.h<br>sed -i '' -e 's/@remarks //g' *.h<br>sed -i '' -e 's/@returns/Returns/g' *.h<br>sed -i '' -e 's/ @memberof [:_.[:alnum:]]*[:[:alnum:]]//g' *.h<br>sed -i '' -e 's/ @related[:_.[:alnum:]]*[:[:alnum:]]//g' *.h<br>sed -i '' -e 's/{@link \([:_.[:alnum:]]*[:[:alnum:]]\)}/&lt;code&gt;\1&lt;\/code&gt;/g' *.h<br>sed -i '' -e 's/@c \([:_.[:alnum:]]*[:[:alnum:]]\)/&lt;code&gt;\1&lt;\/code&gt;/g' *.h</pre><p>You can keep reading the <a href="https://medium.com/@ulukaya/how-to-migrate-ios-api-reference-from-doxygen-to-jazzy-35bd89c6a92b">Migration Guide from Doxygen to Jazzy</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5bd297b9697d" width="1" height="1"><hr><p><a 
href="https://medium.com/google-developers/how-to-generate-fabulous-api-reference-docs-for-ios-5bd297b9697d">How to generate fabulous API reference docs for iOS</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to migrate iOS API Reference from Doxygen to Jazzy]]></title>
            <link>https://medium.com/google-developers/how-to-migrate-ios-api-reference-from-doxygen-to-jazzy-35bd89c6a92b?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/35bd89c6a92b</guid>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[api]]></category>
            <category><![CDATA[ios]]></category>
            <category><![CDATA[ios-app-development]]></category>
            <category><![CDATA[swift]]></category>
            <dc:creator><![CDATA[Ibrahim Ulukaya]]></dc:creator>
            <pubDate>Tue, 25 Apr 2017 23:42:49 GMT</pubDate>
            <atom:updated>2017-04-25T23:44:52.301Z</atom:updated>
            <content:encoded><![CDATA[<p>… and start supporting Swift. At Firebase, <a href="https://medium.com/@ulukaya/how-to-generate-fabulous-api-reference-docs-for-ios-5bd297b9697d">we moved our iOS API reference docs from Doxygen to Jazzy</a>. Here are the issues we faced and the full migration guide.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_KBWvOI6HTmj_EHJ_BBQRg.jpeg" /><figcaption>Jazzy console output</figcaption></figure><h4>Add comments for all interfaces, protocols, properties, methods, enums, structs, and typedefs</h4><p>If you don’t have documentation for one of these elements, Jazzy will still include it in the output, marked as “undocumented”. You can use the --skip-documentation parameter with the <em>jazzy</em> script to see which declarations in your public headers are missing documentation.</p><p><strong>Causes Jazzy Error</strong></p><pre>@property(nonatomic, copy) NSString *someProperty;</pre><p><strong>Fix</strong></p><pre>/// This is a comment for someProperty.<br>@property(nonatomic, copy) NSString *someProperty;</pre><p><strong>Alternate Fix</strong></p><pre>///<br>@property(nonatomic, copy) NSString *someProperty;</pre><h3>Use documentation tags supported by Jazzy</h3><p>Jazzy currently only supports the following documentation tags. Do not use @brief, @var, @enum, @protocol, @discussion, or @remarks, as they are not currently supported by jazzy. 
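One quick way to enforce that list is to grep the public headers for the unsupported tags before running jazzy. A rough sketch (the sample header is invented, and the pattern will also flag legitimate @protocol declarations, so treat hits as leads rather than verdicts):

```shell
# Invented header containing one unsupported tag
printf '/// @brief Does the thing.\n/// @return YES on success.\n' > /tmp/Scan.h

# List lines that still use tags jazzy does not understand
grep -nE '@(brief|var|enum|protocol|discussion|remarks)' /tmp/Scan.h
# prints: 1:/// @brief Does the thing.
```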
To see the formatting for Quick Help in Xcode, press and hold the “option” key and click on the method name or property.</p><ol><li>@see</li><li>@param</li><li>@return</li><li>&lt;code&gt;&lt;/code&gt; creates a link to a constant, property, or method and should be used instead of @link (use &lt;code&gt;&lt;/code&gt; with @see)</li><li>&lt;pre&gt; instead of \code or @code</li><li>&lt;/pre&gt; instead of \endcode or @endcode</li></ol><p><strong>Fix</strong></p><pre>/// This is the description for a method.<br>///<br>/// @see &lt;code&gt;someProperty&lt;/code&gt;<br>/// @see &lt;code&gt;-someMethod:&lt;/code&gt;<br>///<br>/// &lt;pre&gt;<br>/// // This is a comment inside a code snippet.<br>/// // @code and @endcode are not currently supported by<br>/// // jazzy. Use &lt;pre&gt; and &lt;/pre&gt; instead.<br>/// NSString *someString = [[NSString alloc] init];<br>/// &lt;/pre&gt;<br>///</pre><pre>/// @param someParam1 This is a description for someParam1.<br>/// @param someParam2 This is a description for someParam2.<br>///<br>/// @return Description of the return value.<br>- (NSString *)someMethodWithParam1:(NSString *)someParam1 <br>                          param2:(NSString *)someParam2;</pre><p><strong>Alternate Fix</strong></p><pre>/**<br> * This is the description for a method.<br> *<br> * @see &lt;code&gt;someProperty&lt;/code&gt;<br> * @see &lt;code&gt;-someMethod:&lt;/code&gt;<br> *<br> * &lt;pre&gt;<br> * // This is a comment inside a code snippet.<br> * // @code and @endcode are not currently supported by<br> * // jazzy. 
Use &lt;pre&gt; and &lt;/pre&gt; instead.<br> * NSString *someString = [[NSString alloc] init];<br> * &lt;/pre&gt;<br> *<br> * @param someParam1 This is a description for someParam1.<br> * @param someParam2 This is a description for someParam2.<br> *<br> * @return Description of the return value.<br> */<br>- (NSString *)someMethodWithParam1:(NSString *)someParam1<br>                          param2:(NSString *)someParam2;</pre><h3>Add blank lines before and after the @param block</h3><p>There needs to be a newline between the last description paragraph and the first parameter block. There also needs to be a newline before the @return block. Otherwise the parameters and return value also show up in the description, badly formatted.</p><p><strong>Causes Jazzy Error</strong></p><pre>/**<br> * Use removeObserverWithHandle: to stop receiving updates.<br> * @param eventType The type of event to listen for.<br> * @param block The block that should be called with initial data.<br> * @return A handle used to unregister this block later.<br> */</pre><p><strong>Fix</strong></p><pre>/**<br> * Use removeObserverWithHandle: to stop receiving updates.<br> *<br> * @param eventType The type of event to listen for.<br> * @param block The block that should be called with initial data.<br> *<br> * @return A handle used to unregister this block later.<br> */</pre><h3>Use #pragma mark instead of @name</h3><p>Do not use @name tags; they show up with the */ characters appended to the end in the Jazzy-built files. 
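If you have many of these groups, a one-off sed pass can do the swap mechanically. A sketch, under the assumption that each @name comment sits on its own line (file names invented):

```shell
# Invented header with a Doxygen @name group
printf '/** @name Attach observers to read data */\n' > /tmp/Group.h

# Rewrite each @name group comment into a #pragma mark
sed 's|^/\*\* @name \(.*\) \*/$|#pragma mark - \1|' /tmp/Group.h > /tmp/Group.fixed.h
cat /tmp/Group.fixed.h   # prints: #pragma mark - Attach observers to read data
```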
Use #pragma mark instead.</p><p><strong>Causes Jazzy Error</strong></p><pre>/** @name Attach observers to read data */</pre><p><strong>Fix</strong></p><pre>#pragma mark - Attach observers to read data</pre><h3>Use + instead of * in Markdown lists</h3><p>Do not use * Markdown syntax to create bulleted lists; they won’t show up as lists, and the * characters are left behind, making things look like pointers.</p><p><strong>Causes Jazzy Error</strong></p><pre>/**<br> * To modify the data, set its value property to any of the native<br> * types supported by Firebase Database:<br> * * NSNumber (includes BOOL)<br> * * NSDictionary<br> * * NSArray<br> * * NSString<br> * * nil / NSNull to remove the data<br> */</pre><p><strong>Fix</strong></p><pre>/**<br> * To modify the data, set its value property to any of the native<br> * types supported by Firebase Database:<br> *<br> * + NSNumber (includes BOOL)<br> * + NSDictionary<br> * + NSArray<br> * + NSString<br> * + nil / NSNull to remove the data<br> */</pre><h3>Use HTML syntax for error-code lists</h3><p>If you don’t, Jazzy puts them all together on one line. 
You also need a — character between the error code name and the description.</p><p><strong>Causes Jazzy Error</strong></p><pre>@remarks Possible error codes:<br>- @c FIRAuthErrorCodeInvalidCustomToken Indicates a validation error with the custom token.<br>- @c FIRAuthErrorCodeCustomTokenMismatch Indicates the service account and the API key belong to different projects.</pre><p><strong>Fix</strong></p><pre>@remarks Possible error codes:<br>&lt;ul&gt;<br>  &lt;li&gt;@c FIRAuthErrorCodeInvalidCustomToken — Indicates a validation error with the custom token.&lt;/li&gt;<br>  &lt;li&gt;@c FIRAuthErrorCodeCustomTokenMismatch — Indicates the service account and the API key belong to different projects.&lt;/li&gt;<br>&lt;/ul&gt;</pre><p><strong>Note: </strong>the &lt;ul&gt; must not have an empty line between it and the line above or it will end up in a separate &lt;p&gt; tag and not show bullets.</p><h3>No @c tags inside @param or @return tags</h3><p>They break the build.</p><p><strong>Causes Jazzy Error</strong></p><pre>@param app The @c FIRApp for which to retrieve the associated @c FIRAuth instance.<br>@return The @c FIRAuth instance associated with the given @c FIRApp.</pre><p><strong>Fix</strong></p><pre>@param app The FIRApp for which to retrieve the associated FIRAuth instance.<br>@return The FIRAuth instance associated with the given FIRApp.</pre><h3>No smart quotes (“”)</h3><p>They get passed through by Jazzy and cause warnings when you add the generated docs to a CL.</p><p><strong>Causes Jazzy Error</strong></p><pre>“One account per email address”</pre><p><strong>Fix</strong></p><pre>&quot;One account per email address&quot;</pre><h3>Use nullable and nonnull in Objective-C public headers</h3><p>Audit Objective-C public headers with <a href="https://developer.apple.com/swift/blog/?id=25">nullability annotations</a> for interoperability with Swift code. 
Simple pointers that can be <em>nil</em> in Objective-C should be annotated as <em>nullable</em> so that they are mapped properly to optional types in Swift.</p><p><strong>Causes Swift to map everything to implicitly unwrapped optional</strong></p><pre>- (AAPLListItem *)itemWithName:(NSString *)name;</pre><p><strong>Fix</strong></p><pre>- (nullable AAPLListItem *)itemWithName:(nonnull NSString *)name;</pre><p>For more information, see <a href="https://developer.apple.com/swift/blog/?id=25">Nullability and Objective-C</a> in the Apple Developer Blog.</p><h3>Hide documentation for private/pre-release properties and methods with :nodoc:</h3><p>Properties and methods sometimes make it into header files before you’re ready to document them. To skip documenting these properties or methods, add :nodoc: to their comment, like so:</p><pre>/**<br> * A new authorization token that we’re not ready to document yet.<br> * :nodoc:<br> */<br>@property(nonatomic, copy) NSString *authToken;</pre><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=35bd89c6a92b" width="1" height="1"><hr><p><a href="https://medium.com/google-developers/how-to-migrate-ios-api-reference-from-doxygen-to-jazzy-35bd89c6a92b">How to migrate iOS API Reference from Doxygen to Jazzy</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Build Your First Smart Bot For Google Home]]></title>
            <link>https://medium.com/google-developers/build-your-first-smart-bot-for-google-home-18949f74822c?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/18949f74822c</guid>
            <category><![CDATA[google-assistant]]></category>
            <category><![CDATA[bots]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[iot]]></category>
            <category><![CDATA[api]]></category>
            <dc:creator><![CDATA[Ido Green]]></dc:creator>
            <pubDate>Thu, 20 Apr 2017 15:28:16 GMT</pubDate>
            <atom:updated>2017-04-24T23:19:04.406Z</atom:updated>
            <cc:license>http://creativecommons.org/licenses/by/4.0/</cc:license>
            <content:encoded><![CDATA[<h3>Build Your First Assistant App For Google Home</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/325/0*g3Nl_RXKI823U4WC." /></figure><p>In the past few months, I have heard smart people saying that “the future is artificial intelligence first”.</p><p>Artificial intelligence is about making computers “smart” so they can think on their own and be even more helpful to us. It’s clear that Google has been investing heavily in the areas of:</p><ul><li><strong>Machine learning</strong> — Teaching computers how to see patterns in data and act on it.</li><li><strong>Speech recognition</strong> and <strong>Language understanding</strong> — Being able to understand you when you talk, with all the little differences and nuances.</li></ul><p>These days we can see it all come together in the <a href="http://ift.tt/2dPw2iC">Google Assistant</a>. It allows you to have a conversation with Google and be more productive. In this post, we will see how it all works by building a new Action for Google Home. At the same time, we will build a nice bot that we can later integrate with other services (e.g. Slack). <br>Cool?</p><h4><strong>What?</strong></h4><p>Google Home is a voice-activated speaker that users keep in their home. The Google Assistant is the conversation between the users and Google; they can get things done by talking with the Assistant. There are many things users can do by just using the Assistant directly. 
To learn more about the Assistant, check out the short video below.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FFPfQMVf4vwQ%3Ffeature%3Doembed&amp;url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DFPfQMVf4vwQ&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FFPfQMVf4vwQ%2Fhqdefault.jpg&amp;key=d04bfffea46d4aeda930ec88cc64b87c&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/a1b1fc5daa1387c0b589d17acb2fdfe3/href">https://medium.com/media/a1b1fc5daa1387c0b589d17acb2fdfe3/href</a></iframe><p><a href="http://ift.tt/2dOJcw2">Actions on Google</a> allows developers to extend the Assistant. That is what we are going to focus on today in our <a href="http://ift.tt/2p3YD8t">animal joke</a> example. This post will walk you through creating your own Action on Google with API.AI.</p><p>We are going to use API.AI, which is a conversational user experience platform; in other words, it will help us ‘talk’ to machines in a way they will understand us better.</p><p>Let’s start from the end. Please click on the image below and play with our bot to see what is going on. You can start with something like: “please tell me a joke about a dog”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/696/0*y5vZ4xtFON8vEUDp." /></figure><h4>How does a conversation action work?</h4><p>The user needs to invoke your action. You say a phrase like “Ok Google, talk to Animal joke”. This tells Google the name of the action to talk to.</p><p>From this point onwards, the user is talking to your conversation action. Your action generates dialog output, which is then spoken to the user. The user then makes requests, your action processes them, and replies back again. 
The user has a two-way dialog with your action until the conversation is finished.</p><p>See the diagram below if you prefer to ‘see’ what we explained above.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/696/0*z0e_id-TKA7FHvnX." /></figure><h4><strong>What is API.AI?</strong></h4><p><a href="http://api.ai">API.AI</a> lets the machine understand what the user is trying to say, and can provide a response. You type in example sentences of things that a user might speak.</p><p>You can specify what values you need to get from the user. It then uses machine learning to understand the sentences and manage the conversation.</p><p>Click the following <a href="https://console.api.ai/">link</a> to log in to API.AI.</p><p>After the login you can create your first agent. You will need to:</p><ol><li>Give your agent a name. <br>In our case, it will be “AnimalJoker”. Please note that the agent name cannot contain any spaces between the words.</li><li>Give a short description so other users will know what this action is going to do. <br>In our case, type: “An action that tells animal jokes. But only the good ones”.</li><li>Click on ‘Save’. <br>It’s the button in the top-right corner of the screen.</li></ol><h4><strong>What are entities?</strong></h4><p>Entities are the values we are trying to capture from the user’s phrases, kind of like filling out a form requesting details from the user. API.AI looks to extract these, and will do follow-up prompts until done. This is how an entity looks in API.AI:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/696/0*MXJeX6WFvJMDmawD." /></figure><p>We will create an Animal entity.</p><p>The first step is to click on the ‘Create Entity’ button (it’s at the top-right corner).</p><p>Next you should start typing animal names.</p><p>The final results should look similar to the image below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/696/0*yD66YINTkWpBUobC." 
/></figure><p><strong>Things to remember:</strong></p><ol><li>You should ‘help’ API.AI’s machine learning algorithm train itself by providing synonyms. For example, <strong>dog</strong> could also be <strong>puppy</strong>. In our case, you can give it only 2–3 animals. That will be fine for now.</li><li>In the real world, try to give many examples so it will cover more cases.</li></ol><h4><strong>What is an intent?</strong></h4><p>An Intent is triggered by a series of “user says” phrases. This could be something like “please tell me an animal joke” or “Give me a recipe for burger”.</p><p>You need to specify enough sentences to train API.AI’s machine learning algorithm. Then even if the user doesn’t say exactly the words you typed here, API.AI can still understand them!</p><p>You should create separate intents for different types of actions, though.</p><p>Don’t try to combine all of them together.</p><p>In our example, we will create only two intents:</p><ul><li><strong>Tell_Joke intent</strong> — This intent will handle the jokes.</li><li><strong>Quit intent</strong> — This intent will handle the part when the user wishes to finish the action.</li></ul><h4><strong>Build the “Tell_Joke” intent</strong></h4><p>Now we have our new $Animal entity. If you notice the $ before the word — it’s not a mistake. This is the way we will refer to our new entity from now on. Think of it as a special sign to show us that we are referring to our entity and not just another animal.</p><p>It’s time to create the intent that will tell us the jokes.</p><p>First, click on the ‘Create Intent’ button.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/696/0*S-IzMgCW_VaJLTrj." /></figure><p>Second, start typing a few sentences that you will want to use to get a joke. For example, “please tell me a joke on dogs”. Type a few sentences so API.AI can start training its algorithms. 
You can see that while you type, API.AI automatically recognizes that the phrase includes one of the entities, so it highlights it.</p><p>See below how it should look.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/696/0*2uN8aCC_kGfOu5V5." /></figure><p>Next, we are skipping the ‘events’ part. In the ‘Action’ section we need to make sure that our @Animal entity is required, and in the “user says” input line we should type “Please tell me which animal you like”, so that in cases where the user didn’t name an animal, it will be clear to her that we need this entity.</p><p>Finally, in the ‘Text Response’ section we fill in our most amazing jokes. You can take a few ideas from the image below.</p><p>Please note that we are using the <strong>$Animal</strong> value in our response in order to create a joke that is based on the animal that the user asked about.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/696/0*P_8HV6CY2rzQj4b7." /></figure><p>After you fill in all your amazing jokes, don’t forget to click on the ‘save’ button in the top-right corner of the screen.</p><h4><strong>Build the “Quit” intent</strong></h4><p>A good design principle is to allow our user to end the conversation.</p><p>You should click again on the ‘Create Intent’ button. Then, start typing a few sentences that will end the conversation. For example, “bye bye” or “bye animal joker”.</p><p>Below is how this intent should look.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/696/0*ShvKhJQc5yxI69Xm." /></figure><p>Last, but not least, you need to check the ‘end conversation’ checkbox so that it will know to really end the conversation at this point.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/684/0*pn3-c2lvNfL1YW9F." 
/></figure><p>We are almost done!</p><p>By the way, if you wish to load all these definitions without following this tutorial step by step, you can do it in 3 simple steps.</p><p>See the image below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/696/0*QhCs7jDU7tm4wF0y." /></figure><p>Once you import everything from the zip file, you can start to add or edit more intents and entities.</p><h4>Testing</h4><p>Click on ‘Integrations’ in the right side menu. This will open the Agent page with all the options to integrate it with other services (e.g. chat apps, Twitter, etc.).</p><p>You have two easy and quick ways to test your creation. One is to follow the link that you see under ‘Agent Page’. But don’t forget to set the ‘Publish’ switch before you click on the link to your new bot.</p><p>Another way is to click on the “Actions on Google” box under “One-click integration”. This will enable you to test your work as it will be running on Google Home.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/696/0*2LaZeNfUwzdLQP5I." /></figure><p>Once you click on ‘Actions on Google’ you will see this dialog:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/696/0*uugEwvUJmDvH4Ed7." /></figure><p>Fill in the invocation name and click on the ‘<strong>Preview</strong>’ button.</p><p>You will get a screen that lets you talk with the simulator. See the image below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/696/0*EUexIe4bjIeq0N_x." /></figure><p>The cool aspect of the <a href="http://ift.tt/2hCeS4L">web simulator</a> is that it will give you all the answers both in English as text/sound and, on the right side, the full JSON object.</p><h4>Best Practices</h4><ul><li>There are some <a href="http://ift.tt/2od1rfp">policies</a> about what a Conversation Action can be named and support.</li><li>Only Conversation Actions with a well-defined invocation are supported. 
Meaning, the trigger sentence should be clear and short.</li><li>The <a href="http://ift.tt/2od1rfp">guidelines</a> explain all the rules around them.</li></ul><p>In the next post, we will go a bit deeper into <a href="http://ift.tt/2c57nAl">webhooks</a> and the ability to use your own engines in order to provide answers with useful information.</p><p>Be strong and build amazing actions!</p><p><em>Originally published on </em><a href="http://ift.tt/2p4aVgX"><em>Ido Green</em></a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=18949f74822c" width="1" height="1"><hr><p><a href="https://medium.com/google-developers/build-your-first-smart-bot-for-google-home-18949f74822c">Build Your First Smart Bot For Google Home</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Convert a full ObjectiveC app to Swift]]></title>
            <link>https://medium.com/google-developers/migrating-from-objective-c-to-swift-googlecast-reference-app-81030ce814ce?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/81030ce814ce</guid>
            <category><![CDATA[objective-c]]></category>
            <category><![CDATA[ios]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[swift]]></category>
            <category><![CDATA[xcode]]></category>
            <dc:creator><![CDATA[Ibrahim Ulukaya]]></dc:creator>
            <pubDate>Mon, 03 Apr 2017 14:39:23 GMT</pubDate>
            <atom:updated>2017-04-21T16:21:41.273Z</atom:updated>
            <content:encoded><![CDATA[<h4>At GoogleCast, we migrated <a href="https://github.com/googlecast/CastVideos-ios">a reference iOS app</a> to Swift using a converter app, the compiler, simple rules and a linter. Here’s what we learned.</h4><p>Recently <a href="https://twitter.com/ToddKerpelman">@ToddKerpelman</a> and I converted <a href="https://github.com/googlecast/CastVideos-ios">a reference iOS Google Cast app</a> to Swift. Before this, I implemented the <a href="https://github.com/firebase/quickstart-ios">Firebase iOS quickstarts</a>, creating both ObjectiveC and Swift targets at the same time. But I had never tried to migrate a full app to Swift before, so I welcomed this challenge.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7BBCFwDqT9uGH4KGuB-EZQ.jpeg" /></figure><p>Blame it on laziness: I didn’t want to do the mundane work of declaring each variable and function in Swift. I decided to take a leap of faith and try a converter tool as a starter. I was going to see how well it could translate. Then I was going to catch build errors and bugs by checking line by line. Finally, I would go through with a linter tool to catch any bad style.</p><p>After googling “ObjectiveC to Swift converter” I decided to give <a href="https://objectivec2swift.com/">Swiftify</a> a try. I did a test on a small file, and it looked good. Then I uploaded the whole project, pressed the convert button, closed my eyes and hoped for the best.</p><p>What I received looked like Swift code. There were double definitions of the variables, as they got imported from both .h and .m. Some of the definitions that should’ve been at the top of the file were at the bottom. But in general, it looked like I had saved tons of hours of mundane work fixing dots and parentheses. Because of the original code, I even saw definitions like private(set) public var. We started with 269 build errors (more like 800; as you fix, you get more :). 
Luckily, there were lots of low-hanging fruit:</p><ul><li>double definitions of vars</li><li>wrong optionality and function signatures</li><li>Swift 3 style issues (enumeration naming)</li><li>Xcode-suggested fixes (nullability checks).</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xuKb9VP_OPIUQ1MZeg96vw.png" /><figcaption>Build Errors</figcaption></figure><p>It took 3 days to get the count all the way down to 2 digits. Now the no-brainer bugs were gone, and we used the assistant editor to fix the remaining bugs side by side. Most of them turned out to be wrong guesses about the optionality of class vars: the converter always picks implicitly unwrapped variables instead of optionals, or it guesses local variable types wrong. (You don’t need to declare variable types most of the time, as Swift can infer them.) Because I was eager to see the app build, I took a shortcut: I used force unwrapping and force casting just to get rid of build errors. Little did I know, I was going to pay a high price for this later while debugging. After a bunch of var-to-let conversions (the converter declares all variables as var), we had an app that built!</p><p>Don’t get too hyped yet; we were only beginning. Yes, our app was building, but it returned no data and crashed at startup. We spent the next 100 hours debugging to find all the</p><ul><li>null errors,</li><li>wrong if-not-null checks,</li><li>and lots of optionality fixes.</li></ul><p>After those, our app was initializing. It was “almost” working: yet another force cast, yet another force-unwrapping error.</p><p>There is a reason for style guides. They help you use the language the way it was designed to be used, and they prevent most bugs before you create them. At that moment <a href="https://github.com/realm/SwiftLint/">SwiftLint</a> came to the rescue. You can run SwiftLint in Xcode to see all the style guide violations on the lines where they occur, and fix them quickly.
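The optionality lesson here fits in a few lines of Swift (a minimal sketch: MediaItem and describe are hypothetical names for illustration, not code from the Cast app):

```swift
struct MediaItem {
    var title: String?
}

// Force unwrapping (item!.title!) crashes the moment either value is nil.
// Optional binding with guard let recovers gracefully instead.
func describe(_ item: MediaItem?) -> String {
    guard let item = item, let title = item.title else {
        return "Unknown item"
    }
    return title
}
```

With this shape, describe(nil) returns "Unknown item" instead of crashing, which is exactly the kind of fix SwiftLint’s force-unwrapping rule pushes you toward.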
I was determined to get rid of all the optionality bugs and stay as close to the style guide as possible, so I even enabled the opt-in rules. Fixing style not only helped readability but also improved our confidence in the fixes, and we found further bugs along the way. We got rid of all the force unwrapping, force casting, explicit type declarations, and implicitly unwrapped optionals, and we favored if let and guard let statements wherever possible.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*49s3dykYY8bvFmnTidv5aw.png" /><figcaption>Force unwrapping violations</figcaption></figure><h3>Results:</h3><p>We will be refactoring the app in the next weeks, but at least we have a <a href="https://github.com/googlecast/CastVideos-ios/tree/master/CastVideos-swift">1-to-1 Swift port of the CastVideos sample app now.</a></p><ol><li><a href="https://objectivec2swift.com/">Swiftify</a> saved us valuable time compared to converting the project line by line. (As long as you know it’s not going to do all the work for you; it’s a guesstimate.)</li><li>Never, ever use ! (the unwrapping symbol) unless you have no other choice. Force unwrapping (myVar!), force casting (as!), and implicitly unwrapped optionals (Type!) are very error-prone.</li><li>Replacing them with if let, guard let, and optional chaining (x?.y?.do()) will let your app recover from missing values instead of crashing, and will help you in debugging.</li><li>Style guides both improve readability and make your code less prone to bugs.
<a href="https://github.com/realm/SwiftLint">SwiftLint</a> was a great tool for enforcing the Swift style guide.</li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=81030ce814ce" width="1" height="1"><hr><p><a href="https://medium.com/google-developers/migrating-from-objective-c-to-swift-googlecast-reference-app-81030ce814ce">How to Convert a full ObjectiveC app to Swift</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[SSML for Actions on Google]]></title>
            <link>https://medium.com/google-developers/ssml-for-actions-on-google-946117f97fd1?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/946117f97fd1</guid>
            <category><![CDATA[actions-on-google]]></category>
            <category><![CDATA[ssml]]></category>
            <category><![CDATA[google-assistant]]></category>
            <dc:creator><![CDATA[Leon Nicholls]]></dc:creator>
            <pubDate>Tue, 18 Apr 2017 17:26:05 GMT</pubDate>
            <atom:updated>2017-04-18T17:26:05.862Z</atom:updated>
<content:encoded><![CDATA[<p>An important part of designing great conversation actions for the <a href="https://assistant.google.com/">Google Assistant</a> is thinking about how you want them to feel and sound. If you’re creating a fun game, you might want to use a whimsical tone. If you’re building a news reader, you might want to use a more deliberate, serious tone.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/211/1*mjYVa8cN3Hf4sFn8ShFY_w.png" /></figure><p><a href="https://developers.google.com/actions/">Actions on Google</a> lets you add audio to actions, which gives dimension to dialogs and a sense of atmosphere to the overall user experience.</p><h3>&lt;SPEAK&gt;</h3><p>To play audio as part of dialogs, actions support the <a href="https://developers.google.com/actions/reference/ssml">Speech Synthesis Markup Language</a> (SSML), a standard markup language for the generation of synthetic speech.</p><p>To use the SSML markup, start by wrapping your prompts inside <em>&lt;speak&gt;</em> tags:</p><pre>&lt;speak&gt;Welcome to Number Genie!&lt;/speak&gt;</pre><p>Then use other SSML tags to add sounds and control the audio rendering. For example, to play an audio file:</p><pre>&lt;speak&gt;&lt;audio src=&quot;<a href="https://.../meow.ogg">https://.../meow.ogg</a>&quot;&gt;&lt;/audio&gt;&lt;/speak&gt;</pre><p>Since SSML is based on XML, special characters need to use XML escaping:</p><pre>&lt;speak&gt;&amp;quot;What have I learned?&amp;quot; he asked.&lt;/speak&gt;</pre><h3>Take a Break</h3><p>When designing a conversational flow, don’t just consider the words but also the pace of the dialog.
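A note on the escaping rule above: if your fulfillment code builds prompts from dynamic text, it is worth automating the XML escaping rather than doing it by hand. A minimal sketch, assuming Node.js fulfillment (escapeSsml is a hypothetical helper name, not part of the client library):

```javascript
// Escape XML special characters before embedding dynamic text in SSML.
// The ampersand must be replaced first so we don't double-escape.
function escapeSsml(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&apos;');
}

const prompt = '<speak>' + escapeSsml('"What have I learned?" he asked.') + '</speak>';
console.log(prompt);
// <speak>&quot;What have I learned?&quot; he asked.</speak>
```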
This is especially important when your design calls for something wordier.</p><p>For our <a href="https://medium.com/@leonnicholls/interactive-fiction-actions-part-2-2eb83b66cac3">Interactive Fiction actions</a>, we added SSML <em>&lt;break&gt;</em> tags between the sentences to better pace the storytelling:</p><pre>&lt;speak&gt;Grunk think that pig probably go this way.&lt;break time=&quot;800ms&quot;/&gt;It hard to tell at night time, because moon not bright as sun.&lt;break time=&quot;800ms&quot;/&gt;There forest to east and north.&lt;/speak&gt;</pre><p>An important aspect of our <a href="https://developers.google.com/actions/design/principles">conversational design principles</a> is to test your dialogs before you implement them, to get a feel for how they’ll sound for end users. Try reading them out loud to your colleagues, or use our <a href="https://developers.google.com/actions/tools/web-simulator">web simulator</a>. Keep tweaking the SSML markup values until you have the pacing just right.</p><h3>The Cow Says Moooooo</h3><p>Sound effects (SFX) are a very easy way to raise the production value of your action, especially when you implement games.</p><p>In our <a href="https://github.com/actions-on-google/apiai-number-genie-nodejs">Number Genie</a> action, we use sounds to give users fun feedback to let them know how well they are doing during the game:</p><ul><li>A cold wind sound when the guess is very far from the answer,</li><li>A steam sound when the user is within 3 of the answer,</li><li>A steam sound with bells when the user is even closer,</li><li>A congratulatory sound when the user guesses the answer.</li></ul><p>But where can you get sounds for your action? Well, the <a href="https://www.youtube.com/audiolibrary/soundeffects">YouTube audio library</a> provides over 5,000 free sounds that you can use in your own projects.
We’ve picked our favorite short sounds for the actions <a href="https://developers.google.com/actions/tools/sound-library">sound library</a> and hosted them for you on Google’s servers so you can reference them in your actions:</p><pre>&lt;speak&gt;<br>&lt;audio  <br>  src=&quot;<a href="https://actions.google.com/sounds/v1/alarms/alarm_clock.ogg">https://actions.google.com/sounds/v1/alarms/alarm_clock.ogg</a>&quot;&gt;<br>&lt;/audio&gt;<br>&lt;/speak&gt;</pre><h3>P-R-O-N-U-N-C-I-A-T-I-O-N</h3><p>In addition to supporting audio playback, SSML gives you fine-grained control over how your prompts are pronounced, making your action’s responses seem more life-like and appropriate for the kind of information provided to the user.</p><p>In particular, when you have to say numbers or dates, you can specify how you want the data to be interpreted. For example, if you want to say “12345” as “Twelve thousand three hundred forty five”:</p><pre>&lt;speak&gt;&lt;say-as interpret-as=&quot;cardinal&quot;&gt;12345&lt;/say-as&gt;&lt;/speak&gt;</pre><p><a href="https://developers.google.com/actions/reference/ssml#support_for_ssml_elements">Other interpretations</a> for numbers, characters, dates, times and telephone numbers are also supported.</p><h3>Let’s Play</h3><p>Now we can bring this all together by designing our own trivia game action.
We’ll be using many of the SSML features to create the mood and SFX of a typical game show.</p><p>For our persona, we want a game show host, so we pick the voice “male 2” from our <a href="https://developers.google.com/actions/design/principles#picking_your_voice">list of voices</a> available for actions.</p><p>Now, design the greeting for users of the action:</p><pre>&lt;speak&gt;<br>&lt;audio src=&quot;<a href="https://.../game_intro.ogg">https://.../game_intro.ogg</a>&quot;/&gt;<br>Let’s play the SSML Trivia Game!<br>Put on your game face.<br>Here comes your first question.<br>&lt;break time=&quot;500ms&quot;/&gt;<br>Which one of these is the world’s tallest waterfall?<br>&lt;break time=&quot;500ms&quot;/&gt;<br>Angel Falls<br>&lt;break time=&quot;500ms&quot;/&gt;<br>Victoria Falls<br>&lt;break time=&quot;500ms&quot;/&gt;<br>Or Niagara Falls<br>&lt;audio src=&quot;<a href="https://.../ding.ogg">https://.../ding.ogg</a>&quot;/&gt;<br>&lt;/speak&gt;</pre><p>Note the use of the ‘ding’ sound to make it clear to the user that the question is complete and it’s the user’s turn.
This is an example of an <a href="https://en.wikipedia.org/wiki/Earcon">earcon</a> (think ‘icon’ for ears): a distinct sound that provides feedback or conveys additional information.</p><p>Once the user provides an answer, the action can further recreate the ambiance expected of a typical game show with an audience-reaction sound:</p><pre>&lt;speak&gt;<br>&lt;audio src=&quot;<a href="https://.../audience_reaction_correct.ogg">https://.../audience_reaction_correct.ogg</a>&quot;/&gt;<br>You called it. Great job!<br>Here’s the next question.<br>...<br>&lt;/speak&gt;</pre><p>If you are using <a href="https://api.ai">API.AI</a> to develop your action, then you can use SSML in the text responses when creating an intent:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/631/0*Iqo-liR_GwkoTvW3." /></figure><p>Or if you use fulfillment to dynamically generate the responses in code, then you can use SSML with the <a href="https://developers.google.com/actions/tools/nodejs-client-library">Node.js client library</a> “ask” or “tell” methods:</p><pre>assistant.tell(&#39;&lt;speak&gt;OK.
See you next time!<br>   &lt;audio src=&quot;<a href="https://.../bye_sound.ogg">https://.../bye_sound.ogg</a>&quot;/&gt;&lt;/speak&gt;&#39;);</pre><p>Make sure you wrap the <em>&lt;speak&gt;</em> tags around the entire string, and not just a subset of it.</p><h3>Next Steps</h3><p>I’ll leave it to you to design the rest of the game show dialogs. Start with the <a href="https://developers.google.com/actions/design/walkthrough#journey_1_happy_path">happy path</a> and keep adding support for other typical user interactions. Use sound to delight and entertain your users!</p><p>After you have confirmed your action meets the guidelines in the Actions on Google <a href="https://developers.google.com/actions/design/checklist">design checklist</a>, <a href="https://developers.google.com/actions/distribute/deploy">submit</a> your action so that everybody can enjoy your fun action.</p><p><em>Thanks to </em><a href="https://medium.com/@stocker"><em>Nandini Stocker</em></a><em>, Google’s Conversation Design Lead, for co-authoring this post.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=946117f97fd1" width="1" height="1"><hr><p><a href="https://medium.com/google-developers/ssml-for-actions-on-google-946117f97fd1">SSML for Actions on Google</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Run TravisCI locally on Docker]]></title>
            <link>https://medium.com/google-developers/how-to-run-travisci-locally-on-docker-822fc6b2db2e?source=rss----2e5ce7f173a5---4</link>
            <guid isPermaLink="false">https://medium.com/p/822fc6b2db2e</guid>
            <category><![CDATA[github]]></category>
            <category><![CDATA[travis-ci]]></category>
            <category><![CDATA[git]]></category>
            <category><![CDATA[continuous-integration]]></category>
            <category><![CDATA[docker]]></category>
            <dc:creator><![CDATA[Ibrahim Ulukaya]]></dc:creator>
            <pubDate>Wed, 19 Apr 2017 21:41:40 GMT</pubDate>
            <atom:updated>2017-04-20T15:03:39.671Z</atom:updated>
<content:encoded><![CDATA[<p>Have a private GitHub repo that you don’t want Travis to access? Scratching your head over a build error but can’t see the logs? Build it locally! (<a href="https://medium.com/google-developers/hacks-i-did-to-use-travis-ci-with-firebase-ios-quickstarts-da67c4986f29">Interested in TravisCI configuration</a>?)</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qfR9N8vMEPRye8f473tYpA.jpeg" /></figure><ol><li><a href="https://docs.docker.com/docker-for-mac/install/">Install Docker</a></li><li>Install Travis on Docker</li></ol><pre># choose the image according to the language chosen in .travis.yml<br>$ docker run -it -u travis quay.io/travisci/travis-jvm /bin/bash</pre><pre># now that you are in the docker image, switch to the travis user<br>su - travis</pre><pre># Install a recent ruby (default is 1.9.3)<br>rvm install 2.3.0<br>rvm use 2.3.0</pre><pre># Install travis-build to generate a .sh out of .travis.yml<br>cd builds<br>git clone <a href="https://github.com/travis-ci/travis-build.git">https://github.com/travis-ci/travis-build.git</a><br>cd travis-build<br>gem install travis<br>travis # run once to create ~/.travis<br>ln -s `pwd` ~/.travis/travis-build<br>bundle install</pre><pre># Create an ssh key for GitHub<br>ssh-keygen -t rsa -b 4096 -C "YOUR EMAIL REGISTERED IN GITHUB"</pre><pre># Press enter to use the default location for the key<br># You can choose an empty passphrase by pressing enter twice</pre><pre># Now that we have the key, let’s share it with GitHub<br>less ~/.ssh/id_rsa.pub</pre><pre># Copy the contents of id_rsa.pub<br></pre><p>3. Go to your <a href="https://github.com/settings/keys">GitHub SSH key settings</a><br>4. Create a new ssh key with the title “docker key”: “PASTE THE KEY CONTENTS HERE”<br>5. 
Go back to the docker terminal</p><pre># Create the project dir, assuming your project is `AUTHOR/PROJECT` on GitHub<br>cd ~/builds<br>mkdir AUTHOR<br>cd AUTHOR<br>git clone git@github.com:AUTHOR/PROJECT.git<br>cd PROJECT</pre><pre># change to the branch or commit you want to investigate<br># compile the travis script into a bash script<br>travis compile &gt; ci.sh</pre><pre># Open the bash script and fix the branch name<br>vi ci.sh</pre><pre># in vi, type "/branch" to search, then set the right branch name:<br># --branch\=\'NEW_BRANCH\'</pre><pre># You most likely will need to edit ci.sh, as it ignores the ‘matrix’ and ‘env’ keywords<br>bash ci.sh</pre><p>Congrats, in a few steps you are already running TravisCI locally.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=822fc6b2db2e" width="1" height="1"><hr><p><a href="https://medium.com/google-developers/how-to-run-travisci-locally-on-docker-822fc6b2db2e">How to Run TravisCI locally on Docker</a> was originally published in <a href="https://medium.com/google-developers">Google Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>