Posted by Vikram Tank (Product Manager), Coral Team
Coral’s had a busy summer working with customers, expanding distribution, and building new features — and of course taking some time for R&R. We’re excited to share updates, early work, and new models for our platform for local AI with you.
The compiler has been updated to version 2.0, adding support for models built using post-training quantization (full integer quantization only; previously, we required quantization-aware training) and fixing a few bugs. As the TensorFlow team mentions in their Medium post, "post-training integer quantization enables users to take an already-trained floating-point model and fully quantize it to only use 8-bit signed integers (i.e. `int8`)." In addition to reducing the model size, models quantized with this method can now be accelerated by the Edge TPU found in Coral products.
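For reference, here's a minimal sketch of full integer post-training quantization with the TensorFlow Lite converter; the saved-model path, output filename, and calibration data are placeholders you'd supply yourself:

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Placeholder calibration data; in practice, yield a few hundred samples
    # of real input so the converter can calibrate activation ranges.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model('my_model')  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Require every op to be quantized; conversion fails if one can't be.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8  # or tf.int8, depending on TF version
converter.inference_output_type = tf.uint8
with open('my_model_quant.tflite', 'wb') as f:
    f.write(converter.convert())

The resulting .tflite file can then be passed through the Edge TPU compiler.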
We've also updated the Edge TPU Python library to version 2.11.1 to include new APIs for transfer learning on Coral products. The new on-device backpropagation API allows you to perform transfer learning on the last layer of an image classification model. The last layer of a model is removed before compilation and implemented on-device to run on the CPU. This allows for near-real-time transfer learning and doesn't require you to recompile the model. Our previously released imprinting API has been updated to allow you to quickly retrain existing classes or add new ones while leaving other classes alone. You can now even keep the classes from the pre-trained base model. Learn more about both options for on-device transfer learning.
Until now, accelerating your model with the Edge TPU required that you write code using either our Edge TPU Python API or in C++. But now you can accelerate your model on the Edge TPU when using the TensorFlow Lite interpreter API, because we've released a TensorFlow Lite delegate for the Edge TPU. The TensorFlow Lite Delegate API is an experimental feature in TensorFlow Lite that allows for the TensorFlow Lite interpreter to delegate part or all of graph execution to another executor—in this case, the other executor is the Edge TPU. Learn more about the TensorFlow Lite delegate for Edge TPU.
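As an illustration, here's a minimal sketch of running an Edge TPU-compiled model through the TensorFlow Lite interpreter with the delegate attached; the model filename is a placeholder, and the delegate library name assumes a Linux host:

import numpy as np
import tflite_runtime.interpreter as tflite

# Attach the Edge TPU delegate so supported ops run on the accelerator.
interpreter = tflite.Interpreter(
    model_path='model_edgetpu.tflite',  # placeholder: an Edge TPU-compiled model
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
dummy = np.zeros(input_details['shape'], dtype=input_details['dtype'])
interpreter.set_tensor(input_details['index'], dummy)
interpreter.invoke()
output = interpreter.get_tensor(interpreter.get_output_details()[0]['index'])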
Coral has also been working with Edge TPU and AutoML teams to release EfficientNet-EdgeTPU: a family of image classification models customized to run efficiently on the Edge TPU. The models are based upon the EfficientNet architecture to achieve the image classification accuracy of a server-side model in a compact size that's optimized for low latency on the Edge TPU. You can read more about the models’ development and performance on the Google AI Blog, and download trained and compiled versions on the Coral Models page.
And, as summer comes to an end, we also want to share that Arrow offers a discount for students and teachers looking to experiment with the boards in class or the lab this year.
We're excited to keep evolving the Coral platform. Please keep sending us feedback at coral-support@google.com.
Posted by Erica Hanson, Google Developer Relations
This spring, Google and Developer Student Clubs are looking for new passionate student leaders from universities across the globe!
Developer Student Clubs is a program with Google Developers. Through in-person meetups, university students are empowered to learn together and use technology to solve real-life problems with local businesses and start-ups.
Less than two years ago, DSC launched in parts of Asia and Africa. Since then, 90,000+ students have been trained on Google technologies, 500+ solutions have been built for 200+ local startups and organizations, and 170+ clubs participated in our first Solution Challenge!
Bridging the gap between theory and practical application, Google aims to provide student developers with the resources, opportunities and the experience necessary to be more industry ready.
You may be wondering what the benefits of being a Developer Student Club Lead are. Well, here are a few reasons:
Apply to be a Developer Student Club Lead at g.co/dev/dsc.
Deadline to submit applications has been extended to June 15th.
Posted by the Flutter and Chrome OS teams
Chrome OS is the fast, simple, and secure operating system that powers Chromebooks, including the Google Pixelbook and millions of devices used by consumers and students every day. The latest Flutter release adds support for building beautiful, tailored Chrome OS applications, including rich support for keyboard and mouse, and tooling to ensure that your app runs well on a Chromebook. Furthermore, Chrome OS is a great developer workstation for building general-purpose Flutter apps, thanks to its support for developing and running Flutter apps locally on the same device.
Since its inception, Flutter has shared many of the same principles as Chrome OS: productive, fast, and beautiful experiences. Flutter allows developers to build beautiful, fast UIs, while also providing a high degree of developer productivity, and a completely open-source engine, framework and tools. In short, it’s the ideal modern toolkit for building multi-platform apps, including apps for Chrome OS.
Flutter initially focused on providing a UI toolkit for building apps for mobile devices, which typically feature touch input and small screens. However, we’ve been building keyboard and mouse support into Flutter since before our 1.0 release last December. And today, we’re pleased to announce that Flutter for Chrome OS is now stronger with scroll wheel support, hover management, and better keyboard event support. In addition, Flutter has always been great at allowing you to build apps that run at any size (large screen or small), with seamless resizing, as shown here in the Chrome OS Best Practices Sample:
The Chrome OS best practices sample in action
The Chrome OS Hello World sample is an app built with Flutter that is optimized for Chrome OS. This includes a responsive UI to showcase how to reposition items and have layouts that respond well to changes in size from mobile to desktop.
Because Chrome OS runs Android apps, targeting Android is the way to build Chrome OS apps. However, while building Chrome OS apps on Android has always been possible, as described in these guidelines, it’s often difficult to know whether your Android app is going to run well on Chrome OS. To help with that problem, today we are adding a new set of lint rules to the Flutter tooling to catch violations of the most important of the Chrome OS best practice guidelines:
The Flutter Chrome OS lint rules in action
Once you put these Chrome OS lint rules in place, you'll quickly see any problems in your Android app that would hamper it when running on Chrome OS. To learn how to take advantage of these rules, see the linting docs for Flutter Chrome OS.
But all of that is just the beginning -- the Flutter tools allow you to develop and test your apps directly on Chrome OS as well.
No matter what platform you're targeting, Flutter has support for rich IDEs and programming tools like Android Studio and Visual Studio Code. Over the last year, Chrome OS has been building support for running the Linux version of these tools with the beta of Linux on Chrome OS (aka Crostini). And, because Chrome OS also supports Android natively, you can configure the Flutter tooling to run your Android apps directly without an emulator involved.
The Flutter development tools running on Chrome OS
All of the great productivity of Flutter is available, including Stateful Hot Reload, seamless resizing, keyboard and mouse support, and so on. Recent improvements in Crostini, such as high DPI support, Crostini file system integration, easier adb, and so on, have made this experience even better! Of course, you don’t have to test against the Android container running on Chrome OS; you can also test against Android devices attached to your Chrome OS box. In short, Chrome OS is the ideal environment in which to develop and test your Flutter apps, especially when you’re targeting Chrome OS itself.
With its unique combination of simplicity, security, and capability, Chrome OS is an increasingly popular platform for enterprise applications. These apps often work with large quantities of data, whether it's a chart or graph for visualization, or lists and forms for data entry. The support in Flutter for high-quality graphics, large-screen layout, and input features (like text selection, tab order and mousewheel) makes it an ideal way to port mobile applications for the enterprise. One purveyor of such apps is AppTree, who use Flutter and Chrome OS to solve problems for their enterprise customers.
“Creating a Chrome OS version of our app took very little effort. In 10 minutes we tweaked a few values and now our users have access to our app on a whole new class of devices. This is a huge deal for our enterprise customers who have been wanting access to our app on Desktop devices.”
By using Flutter to target Chrome OS, AppTree was able to start with their existing Flutter mobile app and easily adapt it to take advantage of the capabilities of Chrome OS.
If you'd like to target Chrome OS with Flutter, you can do so today simply by installing the latest version of Flutter. If you'd like to run the Flutter development tools on Chrome OS, you can follow these instructions to get started fast. To see a real-world app built with Flutter that has been optimized for Chrome OS, check out the Developer Quest sample that the Flutter DevRel team launched at the 2019 Google I/O conference. And finally, don't forget to try out the Flutter Chrome OS linting rules to make sure that your Chrome OS apps are following the most important practices.
Flutter and Chrome OS go great together. What are you going to build?
Posted by Roy Glasberg, Founder of Launchpad Accelerator
For the past six years, Launchpad has connected startups from around the world with the best of Google - its people, network, methodologies, and technologies. We have worked with market leaders in over 40 countries across 6 regional programs (San Francisco, Brazil, Africa, Israel, India, and Tokyo). Launchpad also includes a new program in Mexico announced earlier this year, along with our Indie Games Accelerator and Google.org AI for Social Good Accelerator programs.
We are pleased to announce that the next cohort of startups has been selected for our upcoming programs in Africa, Brazil, and India. We reviewed over 1,000 applications for these programs, and were thoroughly impressed with the quality of startups that indicated their interest. The startups chosen represent those using technology to create a positive impact on key industries in their region and we look forward to supporting them and connecting them with startup ecosystems around the world.
In Africa, we have selected 12 startups from 6 African countries for our 3rd class in this region:
In India, for our 2nd class, we are focused on seed- to growth-stage startups that operate across a number of sectors, using ML and AI to solve India-specific problems:
In Brazil, we have chosen startups that are applying ML in interesting ways to solve local challenges.
Applications are still open for Launchpad Accelerator Mexico - if you are a LATAM-based startup using technology to solve big challenges for that region, please apply to the program here.
As with all of our previous regional classes, these startups will benefit from customized programs, access to partners and mentors on the ground, and Google's support and dedication to their success.
Stay updated on developments and future opportunities by subscribing to the Google Developers newsletter, as well as The Launchpad Blog.
With ARCore and Google Lens, we're working to make smartphone cameras smarter. ARCore enables developers to build apps that can understand your environment and place objects and information in it. Google Lens uses your camera to help make sense of what you see, whether that's automatically creating contact information from a business card before you lose it, or soon being able to identify the breed of a cute dog you saw in the park. At Mobile World Congress, we're launching ARCore 1.0 along with new support for developers, and we're releasing updates for Lens and rolling it out to more people.
ARCore, Google's augmented reality SDK for Android, is out of preview and launching as version 1.0. Developers can now publish AR apps to the Play Store, and it's a great time to start building. ARCore works on 100 million Android smartphones, and advanced AR capabilities are available on all of these devices. It works on 13 different models right now (Google's Pixel, Pixel XL, Pixel 2 and Pixel 2 XL; Samsung's Galaxy S8, S8+, Note8, S7 and S7 edge; LGE's V30 and V30+ (Android O only); ASUS's Zenfone AR; and OnePlus's OnePlus 5). And beyond those available today, we're partnering with many manufacturers to enable their upcoming devices this year, including Samsung, Huawei, LGE, Motorola, ASUS, Xiaomi, HMD/Nokia, ZTE, Sony Mobile, and Vivo.
Making ARCore work on more devices is only part of the equation. We're bringing developers additional improvements and support to make their AR development process faster and easier. ARCore 1.0 features improved environmental understanding that enables users to place virtual assets on textured surfaces like posters, furniture, toy boxes, books, cans and more. Android Studio Beta now supports ARCore in the Emulator, so you can quickly test your app in a virtual environment right from your desktop.
Everyone should get to experience augmented reality, so we're working to bring it to people everywhere, including China. We'll be supporting ARCore in China on partner devices sold there (starting with Huawei, Xiaomi and Samsung) to enable them to distribute AR apps through their app stores.
We've partnered with a few great developers to showcase how they're planning to use AR in their apps. Snapchat has created an immersive experience that invites you into a "portal"—in this case, FC Barcelona's legendary Camp Nou stadium. Visualize different room interiors inside your home with Sotheby's International Realty. See Porsche's Mission E Concept vehicle right in your driveway, and explore how it works. With OTTO AR, choose pieces from an exclusive set of furniture and place them, true to scale, in a room. Ghostbusters World, based on the film franchise, is coming soon. In China, place furniture and over 100,000 other pieces with Easyhome Homestyler, see items and place them in your home when you shop on JD.com, or play games from NetEase, Wargaming and Game Insight.
With Google Lens, your phone's camera can help you understand the world around you, and we're expanding availability of the Google Lens preview. With Lens in Google Photos, when you take a picture, you can get more information about what's in your photo. In the coming weeks, Lens will be available to all Google Photos English-language users who have the latest version of the app on Android and iOS. Also over the coming weeks, English-language users on compatible flagship devices will get the camera-based Lens experience within the Google Assistant. We'll add support for more devices over time.
And while it's still a preview, we've continued to make improvements to Google Lens. Since launch, we've added text selection features, the ability to create contacts and events from a photo in one tap, and—in the coming weeks—improved support for recognizing common animals and plants, like different dog breeds and flowers.
Smarter cameras will enable our smartphones to do more. With ARCore 1.0, developers can start building delightful and helpful AR experiences for them right now. And Lens, powered by AI and computer vision, makes it easier to search and take action on what you see. As these technologies continue to grow, we'll see more ways that they can help people have fun and get more done on their phones.
We're excited to announce the three new startups joining Launchpad Studio, our 6-month mentorship program tailored to help applied-machine learning startups build great products using the most advanced tools and technologies available. We intend to support these startups by leveraging some of our platforms like Google Cloud Platform, TensorFlow, and Android, while also providing one-on-one support from product and research experts from several Google teams including Google Cloud, Verily, X, Brain, and ML Research. Launchpad Studio has also enlisted the expertise of a number of top industry practitioners and thought leaders to ensure Studio startups are successful over the long-term. These three startups were selected based on the novel ways they've applied ML to important challenges in the Healthcare industry:
The cost of treating heart failure in the US is currently estimated at ~$40bn annually. With the continued aging of the US population, the impact of Congestive Heart Failure is expected to increase substantially.
Through lightweight, low-cost, cloth-based form factors, Nanowear can capture and transmit medical-grade data directly from the skin, enabling deep analytics and prescriptive recommendations. As a first product application, Nanowear's SimpleSense aims to transform Congestive Heart Failure management.
Nanowear intends to develop predictive models that provide both physicians and patients with leading indicators and data to anticipate potential hospitalizing events. Combining these datasets with deep machine learning capabilities will position Nanowear at the epicenter of the next generation of telemedicine and connected-self healthcare.
With the big data revolution, the medical and scientific communities have more information to work with than in all of history combined. However, with such a wealth of information, it is increasingly difficult to differentiate productive leads from dead ends.
Artificial intelligence and machine learning powered by systems biology can organize, validate, predict and compare the overabundance of information. Owkin builds mathematical models and algorithms that can interpret omics, visual data, biostatistics and patient profiles.
Owkin is focused on federated learning in healthcare to overcome the data sharing problem, building collective intelligence from distributed data.
A shortage of healthcare specialists and a lack of interoperability between medical devices cause exam results in Brazil to take an average of 60 days to be ready, cost hundreds of dollars, and leave millions of people with no access to quality healthcare.
The standard solution to such a problem is telemedicine, but the lack of direct, automatic communication with medical devices and of pre-processing AI behind it hurts its scalability, resulting in very low adoption worldwide.
Portal Telemedicina is a digital healthcare platform that provides reliable, fast, low-cost online diagnostics to hundreds of cities in Brazil. Thanks to its communication protocols and AI automation, the solution enables interoperability across systems and patients, and exams are handled seamlessly from medical devices to diagnostics. The company has built a large proprietary dataset and uses Google's TensorFlow to train machine learning algorithms on millions of images and correlated health records to predict pathologies with human-level accuracy.
By leveraging artificial intelligence to empower doctors, the startup helps millions of people in Brazil and wants to expand to provide universal access to healthcare.
Each startup will get tailored, equity-free support, with the goal of successfully completing an ML-focused project during the term of the program. To support this process, we provide resources, including deep engagement with engineers in Google Cloud, Google X, and other product teams, as well as Google Cloud credits. We also include both Google Cloud Platform and G Suite training in our engagement with all Studio startups.
Based in San Francisco, Launchpad Studio is a fully tailored product development acceleration program that matches top ML startups and experts from Silicon Valley with the best of Google - its people, network, and advanced technologies - to help accelerate applied ML and AI innovation. The program's mandate is to support the growth of the ML ecosystem, and to develop product methodologies for ML.
Launchpad Studio is looking to work with the best and most game-changing ML startups from around the world. While we're currently focused on working with startups in the Healthcare and Biotech space, we'll soon be announcing other industry verticals, and any startup applying AI/ML technology to a specific industry vertical can apply on a rolling basis.
This past year we worked hard to make the Google Assistant better for users and developers like you, but we also wanted to find new ways to reward you for doing what you love – building great apps for the Google Assistant.
So at I/O 2017, we announced our first Actions on Google Developer Challenge encouraging you to build helpful, entertaining apps for the Assistant. Today, we're announcing the competition's winners, chosen from thousands of entries.
In addition to the top three prize winners, we also selected winners among various categories including "best app by students," "best parenting app," "best life hack" and more. You can read up on all of the winners' apps here. Congratulations to our winners and to all those who submitted an app as part of the contest – we can't wait for users to check them out!
Happy holidays and happy New Year. We can't wait to see what the next year has in store.
Be sure to follow us on Twitter and check out the Google Assistant developer community program to stay in the know for 2018!
Correction: [January 4, 2018] Two previously announced winners were found ineligible according to the competition's terms. Updated winners available here.
As humans, we rely on sound to guide us through our environment, help us communicate with others and connect us with what's happening around us. Whether walking along a busy city street or attending a packed music concert, we're able to hear hundreds of sounds coming from different directions. So when it comes to AR, VR, games and even 360 video, you need rich sound to create an engaging immersive experience that makes you feel like you're really there. Today, we're releasing a new spatial audio software development kit (SDK) called Resonance Audio. It's based on technology from Google's VR Audio SDK, and it works at scale across mobile and desktop platforms.
Bringing rich, dynamic audio environments into your VR, AR, gaming, or video experiences without affecting performance can be challenging. There are often few CPU resources allocated for audio, especially on mobile, which can limit the number of simultaneous high-fidelity 3D sound sources for complex environments. The SDK uses highly optimized digital signal processing algorithms based on higher order Ambisonics to spatialize hundreds of simultaneous 3D sound sources, without compromising audio quality, even on mobile. We're also introducing a new feature in Unity for precomputing highly realistic reverb effects that accurately match the acoustic properties of the environment, reducing CPU usage significantly during playback.
We know how important it is that audio solutions integrate seamlessly with your preferred audio middleware and sound design tools. With Resonance Audio, we've released cross-platform SDKs for the most popular game engines, audio engines, and digital audio workstations (DAWs) to streamline workflows, so you can focus on creating more immersive audio. The SDKs run on Android, iOS, Windows, MacOS and Linux platforms and provide integrations for Unity, Unreal Engine, FMOD, Wwise and DAWs. We also provide native APIs for C/C++, Java, Objective-C and the web. This multi-platform support enables developers to implement sound designs once and easily deploy their project with consistent-sounding results across the top mobile and desktop platforms. Sound designers can save time by using our new DAW plugin for accurately monitoring spatial audio that's destined for YouTube videos or apps developed with Resonance Audio SDKs. Web developers get the open source Resonance Audio Web SDK that works in the top web browsers by using the Web Audio API.
DAW plugin for sound designers to monitor audio destined for YouTube 360 videos or apps developed with the SDK
By providing powerful tools for accurately modeling complex sound environments, Resonance Audio goes beyond basic 3D spatialization. The SDK enables developers to control the direction in which acoustic waves propagate from sound sources. For example, a guitar can sound quieter when you stand behind the player than when you stand in front, and louder when you face the guitar than when your back is turned.
Another SDK feature is automatically rendering near-field effects when sound sources get close to a listener's head, providing an accurate perception of distance, even when sources are close to the ear. The SDK also enables sound source spread, by specifying the width of the source, allowing sound to be simulated from a tiny point in space up to a wall of sound. We've also released an Ambisonic recording tool to spatially capture your sound design directly within Unity, save it to a file, and use it anywhere Ambisonic soundfield playback is supported, from game engines to YouTube videos.
If you're interested in creating rich, immersive soundscapes using cutting-edge spatial audio technology, check out the Resonance Audio documentation on our developer site, let us know what you think through GitHub, and show us what you build with #ResonanceAudio on social media; we'll be resharing our favorites.
I'm happy to share that we've opened registration for the European installment of our global event series, Google Developer Days (GDD). Google Developer Days showcase our latest developer product and platform updates to help you develop high-quality apps, grow and retain an active user base, and tap into tools to earn more.
Google Developer Days Europe (GDD Europe) will take place on September 5-6, 2017, in Krakow, Poland. We'll feature technical talks on a range of products including Android, the Mobile Web, Firebase, Cloud, Machine Learning, and IoT. In addition, we'll offer opportunities for you to join hands-on training sessions and 1:1 time with Googlers and members of our Google Developers Experts community. We're looking forward to meeting you face-to-face so we can better understand your needs and improve our offerings for you.
If you're interested in joining us at GDD Europe, registration is now open.
Can't make it to Krakow? We've got you covered. All talks will be livestreamed on the Google Developers YouTube channel, and session recordings will be available there after the event. Looking to tune into the action with developers in your own neighborhood? Consider joining a GDD Extended event or organizing one for your local developer community.
Whether you're planning to join us in-person or remotely, stay up-to-date on the latest announcements using #GDDEurope on Twitter, Facebook, and Google+.
We're looking forward to seeing you in Europe soon!
Posted by Jason Douglas, PM Director for Actions on Google
The Google Assistant brings together all of the technology and smarts we've been building for years, from the Knowledge Graph to Natural Language Processing. To be a truly successful Assistant, it should be able to connect users across the apps and services in their lives. This makes enabling an ecosystem where developers can bring diverse and unique services to users through the Google Assistant really important.
In October, we previewed Actions on Google, the developer platform for the Google Assistant. Actions on Google further enhances the Assistant user experience by enabling you to bring your services to the Assistant. Starting today, you can build Conversation Actions for Google Home and request to become an early access partner for upcoming platform features.
Conversation Actions for Google Home
Conversation Actions let you engage your users to deliver information, services, and assistance. And the best part? It really is a conversation -- users won't need to enable a skill or install an app; they can just ask to talk to your action. For now, we've provided two developer samples of what's possible: just say "Ok Google, talk to Number Genie" or try "Ok Google, talk to Eliza" for the classic 1960s AI exercise.
You can get started today by visiting the Actions on Google website for developers. To help create a smooth, straightforward development experience, we worked with a number of development partners, including conversational interaction development tools API.AI and Gupshup, analytics tools DashBot and VoiceLabs, and consulting companies such as Assist, Notify.IO, Witlingo and Spoken Layer. We also created a collection of samples and voice user interface (VUI) resources, or you can check out the integrations from our early access partners as they roll out over the coming weeks.
Coming soon: Actions for Pixel and Allo + Support for Purchases and Bookings
Today is just the start, and we're excited to see what you build for the Google Assistant. We'll continue to add more platform capabilities over time, including the ability to make your integrations available across the various Assistant surfaces like Pixel phones and Google Allo. We'll also enable support for purchases and bookings as well as deeper Assistant integrations across verticals. Developers who are interested in creating actions using these upcoming features should register for our early access partner program and help shape the future of the platform.
We're always working to make Google Sign-In a better experience for developers and end users. Over the last year, we've simplified the user experience by reducing the default amount of information requested from the user and updated the branding. Major apps like The Guardian have taken advantage of these updates and we now see over twice as many people use Google Sign-In with their app.
The more streamlined experience begins with updated sign-in buttons that show the standard Google logo, reflecting our new logo design. Furthermore, Google Sign-In now works for all users, not just those with a G+ profile. The consent screen has been redesigned so that the user sees, inline, the information that will be provided to the app (name, email, and profile photo) on Android and the web, with iOS coming soon.
With these improvements in place, we are now announcing the migration from our Google+ Sign-In product to the new model. Making this change for your app is simple: just use the latest libraries with the default sign-in configuration, or replace the "https://www.googleapis.com/auth/plus.login" scope with "profile" and update the branding of the Google Sign-In button (your existing users will not be asked to sign in again).
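The announcement above targets the platform sign-in SDKs, but the same scope swap applies anywhere you request OAuth scopes. As a hedged illustration only (not part of the original post), here's what it might look like in Python with the google-auth-oauthlib package; the client secrets filename is a placeholder:

from google_auth_oauthlib.flow import InstalledAppFlow

# Before: the Google+ social-graph scope
# scopes = ['https://www.googleapis.com/auth/plus.login']
# After: basic profile information only
scopes = ['openid', 'profile', 'email']

flow = InstalledAppFlow.from_client_secrets_file('client_secret.json', scopes)
credentials = flow.run_local_server(port=0)  # opens a browser for consent
print(credentials.token)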
For developers who continue to use Google+ Sign-In scopes, expect some changes in behavior. New users going through the older sign-in flow will no longer be asked to share social graph data with your app. In the upcoming versions of SDKs on all platforms, we'll replace the Google+ branded assets with the new Google branding. So, if your app uses the default button, expect a new look and improved user experience with Google Sign-In. And after January 2017, calling our Plus People or Games Players APIs for users who had previously granted you access may begin returning empty results.
With these changes, we are deprecating the Plus People API. You can read the deprecation notes here: Android, Web. If your app needs social information and more extensive profile data, we have better alternatives for you. The new contacts-based People API provides a rich set of users' connections. To enhance the distribution of your app through the social graphs of your app's user base, use the recently launched Firebase Invites, a cross-platform solution for sending personalized email and SMS invitations. On Android, you may also get rich cloud and device-based Contacts data from the Contacts Provider.
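For example, here's a minimal sketch of listing a user's connections with the People API from Python; the creds object is an assumed OAuth credentials object carrying a contacts read scope:

from googleapiclient.discovery import build

people = build('people', 'v1', credentials=creds)  # creds: assumed OAuth credentials
results = people.people().connections().list(
    resourceName='people/me',
    pageSize=25,
    personFields='names,emailAddresses').execute()
for person in results.get('connections', []):
    for name in person.get('names', []):
        print(name.get('displayName'))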
In addition to these user-facing changes, we've also overhauled our Identity/authentication APIs to simplify implementation on both the client and server. Please check out our previous blog posts if you missed them:
Posted by Rob Jagnow, Software Engineer, Google VR
Whether you're playing a game or watching a video, VR lets you step inside a new world and become the hero of a story. But what if you want to tell a story of your own?
Producing immersive 3D animation can be difficult and expensive. It requires complex software to set keyframes with splined interpolation or costly motion capture setups to track how live actors move through a scene. Professional animators spend considerable effort to create sequences that look expressive and natural.
At Daydream Labs, we've been experimenting with ways to reduce technical complexity and even add a greater sense of play when animating in VR. In one experiment we built, people could bring characters to life by picking up toys, moving them through space and time, and then replaying the scene.
As we watched people play with the animation experiment, we noticed a few things:
The need for complex metaphors goes away in VR: What can be complicated in 2D can be made intuitive in 3D. Instead of animating with graph editors or icons representing location, people could simply reach out, grab a virtual toy, and carry it through the scene. These simple animations had a handmade charm that conveyed a surprising degree of emotion.
The learning curve drops to zero: People were already familiar with how to interact with real toys, so they jumped right in and got started telling their stories. They didn't need a lengthy tutorial, and they were able to modify their animations and even add new characters without any additional help.
People react to virtual environments the same way they react to real ones: When people entered a playful VR environment, they understood it was a safe space to play with the toys around them. They felt comfortable performing and speaking in funny voices. They took more risks knowing the virtual environment was designed for play.
To create more intricate animations, we also built another experiment that let people independently animate the joints of a single character. It let you record your character’s movement as you separately animated the feet, hands, and head — just like you would with a puppet.
VR allows us to rethink software and make certain use cases more natural and intuitive. While this kind of animation system won’t replace professional tools, it can allow anyone to tell their own stories. There are many examples of using VR for storytelling, especially with video and animation, and we’re excited to see new perspectives as more creators share their stories in VR.
Posted by Wesley Chun (@wescpy), Developer Advocate, Google Apps
At Google I/O 2016, we launched a new Google Sheets API (click here to watch the entire announcement). The updated API includes many new features that weren't available in previous versions, including access to functionality found in the Sheets desktop and mobile user interfaces. My latest DevByte video shows developers how to get data into and out of a Google Sheet programmatically, walking through a simple script that reads rows out of a relational database and transfers the data to a brand new Google Sheet.
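If you want to follow along, a minimal sketch of establishing the service endpoint with the Google APIs client library for Python might look like this; the creds object is assumed to come from the OAuth flow covered in the quickstarts:

from googleapiclient.discovery import build

# creds: an authorized credentials object obtained via the OAuth flow.
SHEETS = build('sheets', 'v4', credentials=creds)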
Let’s take a sneak peek of the code covered in the video. Assuming that SHEETS has been established as the API service endpoint, SHEET_ID is the ID of the Sheet to write to, and data is an array with all the database rows, this is the only call developers need to make to write that raw data into the Sheet:
SHEETS.spreadsheets().values().update(spreadsheetId=SHEET_ID, range='A1', body={'values': data}, valueInputOption='RAW').execute()
Reading the data back out of the Sheet is just as straightforward:
rows = SHEETS.spreadsheets().values().get(spreadsheetId=SHEET_ID, range='Sheet1').execute().get('values', [])
for row in rows:
    print(row)
If you're ready to get started, take a look at the Python or other quickstarts in a variety of languages before checking out the DevByte. If you want a deeper dive into the code covered in the video, check out the post at my Python blog. Once you get going with the API, one of the challenges developers face is constructing the JSON payload to send in API calls; the common operations samples can really help you with this. Finally, if you're ready for a meatier example, check out our JavaScript codelab where you'll write a sample Node.js app that manages customer orders for a toy company, the database of which is used in this DevByte, preparing you for the codelab.
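To give a flavor of those payloads, here's a small sketch of a spreadsheets().batchUpdate() call that adds a new sheet; the sheet title is a placeholder:

body = {'requests': [{'addSheet': {'properties': {'title': 'Imported'}}}]}
SHEETS.spreadsheets().batchUpdate(spreadsheetId=SHEET_ID, body=body).execute()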
We hope all these resources help developers create amazing applications and awesome tools with the new Google Sheets API! Please subscribe to our channel, give us your feedback below, and tell us what topics you would like to see in future episodes!
Posted by Mike Procopio, Engineering Manager, Google Drive and Wesley Chun, Developer Advocate, Google Apps
WhatsApp is one of the most popular mobile apps in the world. Over a billion users send and receive over 42 billion messages, photos, and videos every day. It's fast, easy to use, and reliable.
But what happens when people lose their phone or upgrade to a new one? All those messages and memories would be gone. So we worked with WhatsApp to give their users the ability to back up their data to Google Drive and restore it when they set up WhatsApp on a new phone. With messages and media safely stored in your Drive, there's no more worry about losing any of those memories.
One of the biggest challenges for an integration of this scope is scaling. How do you back up data for a billion users? Many things were done to ensure the feature works as intended without users noticing it. Our approach? First, we relied on a proven infrastructure that can handle this kind of volume: Google Drive. Next, we optimized what to back up and when to do the backups; the key was to upload only incremental changes rather than transmit identical files.
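WhatsApp's actual backup pipeline isn't public, but a rough sketch of that skip-identical-files idea using the Drive v3 Python client might look like this; the helper name, checksum comparison, and creds object are all illustrative assumptions:

import hashlib
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

drive = build('drive', 'v3', credentials=creds)  # creds: assumed OAuth credentials

def backup_if_changed(local_path, remote_file=None):
    # remote_file: metadata from a previous files().list() call made with
    # fields='files(id, md5Checksum)'; None if the file was never backed up.
    with open(local_path, 'rb') as f:
        local_md5 = hashlib.md5(f.read()).hexdigest()
    if remote_file and remote_file.get('md5Checksum') == local_md5:
        return  # identical to the last backup; skip the upload entirely
    media = MediaFileUpload(local_path)
    if remote_file:
        drive.files().update(fileId=remote_file['id'], media_body=media).execute()
    else:
        drive.files().create(body={'name': local_path}, media_body=media).execute()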
On the server side (backend), we focused on optimizing byte storage as well as the number of network calls between WhatsApp and Google. As far as deployment goes, we rolled out slowly over several months to minimize the size and impact of deployment.
If you have ever used WhatsApp, you know how it gets out of your way, and lets you get started quickly: no account creation, no passwords to manage, and no user IDs to remember or exchange. This sets a high bar for any integration with WhatsApp: for it to feel like a natural part of WhatsApp, it has to be as seamless, fast, and reliable as WhatsApp itself.
By using the Google Drive API, we were able to achieve this: no need to type in any usernames or passwords, just a few taps in the app, and WhatsApp starts backing up. The best part is that all the tools used in the integration are available to all developers. With the Google Drive API, seamless and scalable integrations are as easy to use for the user as they are for developers.
To learn more about how we did it and get all the details, check out the complete talk we gave together with WhatsApp at Google I/O 2016.
Are you ready to integrate your web and mobile apps with Google Drive? Get started today by checking out our intro video as well as the video demoing the newest API, then dig in with the developer docs found at developers.google.com/drive. We're excited to see what you build next with the Drive API—and we're ready to scale with you!
Posted by Jeff Nusz, Data Arts Team, Pixel Painter
Two weeks ago, we introduced Tilt Brush, a new app that enables artists to use virtual reality to paint the 3D space around them. Part virtual reality, part physical reality, it can be difficult to describe how it feels without trying it firsthand. Today, we bring you a little closer to the experience of painting with Tilt Brush using the powers of the web in a new Chrome Experiment titled Virtual Art Sessions.
Virtual Art Sessions lets you observe six world-renowned artists as they develop blank canvases into beautiful works of art using Tilt Brush. Each session can be explored from start to finish from any angle, including the artist’s perspective – all viewable right from the browser.
Participating artists include illustrator Christoph Niemann, fashion illustrator Katie Rodgers, sculptor Andrea Blasich, installation artist Seung Yul Oh, automotive concept designer Harald Belker, and street artist duo Sheryo & Yok. The artists’ unique approaches to this new medium become apparent when seeing them work inside their Tilt Brush creations. Watch this behind-the-scenes video to hear what the artists had to say about their experience:
Virtual Art Sessions makes use of Google Chrome's V8 JavaScript engine for high-performance processing power to render large volumes of data in real time. This includes point cloud data of the artist's physical form, 3D geometry data of the artwork, and position data of the VR controllers. It also relies on Chrome's support of WebM video and WebGL to produce the 360° representations of the artists and artwork; the artist portrayals alone require the browser to draw over 200,000 points 30 times a second. For a deeper look, read the technical case study or browse the project code, which is available open source from the site's tech page.
We hope this experiment provides a window into the world of painting in virtual reality using Tilt Brush. We are excited by this new medium and hope the experience leaves you feeling the same. Visit g.co/VirtualArtSessions to start exploring.
Originally posted on Android Developers blog
Posted by Morgan Dollard, Product Manager of Google Play Games
With mobile gamers across 190 countries, Google Play Games is made up of a vibrant and diverse gaming community. And these players are more engaged than ever. Over the past year, the number of games reaching over 1 million installs grew by 50 percent.
Today, at our annual Developer Day at the Game Developers Conference, we announced new platform and ads tools to help developers of all sizes reach this global audience and accelerate the growth of their games business. Check out the full range of features below that will help game developers build their apps, grow their user base, and earn more revenue.
In February, we introduced Gamer IDs so that anyone could create a gaming persona. We also simplified the sign-in process for Google Play Games so players could get back to playing their game more quickly. We're also working on product enhancements to make Play Games a little more social and fun, which will mean more engaged players who play your game for longer. One example is the launch of Gamer friends (coming soon!), where your players can add and interact with their friends from within the Google Play Games app (without needing a Google+ account).
We're also launching the Indie Corner, a new collection on Google Play that will highlight amazing games built by indie developers. You can nominate your awesome indie game for inclusion at g.co/indiecornersubmission. We'll pick the best games to showcase based on the quality of the experience and exemplary use of Google Play game services.
In January, we added features to Player Analytics, the free reporting tool of Google Play game services, which helps you understand how players are progressing, spending and churning. Today, we previewed some upcoming new tools that would be available in the coming months, including:
Promoting your game and growing your audience is important, but it’s just as important to reach the right audience for your game, the players who want to open the game again and again. That’s why today we’ve unveiled new features that make it simpler to reach the right audience at scale.
AdMob helps game developers around the world maximize revenue through in-app advertising. At GDC, we also announced a new way to help you earn more through AdMob Mediation. Rewarded advertising is a popular form of game monetization -- users are given the choice to engage with ads in exchange for an in-app reward. AdMob Mediation will enable you to easily monetize your apps with rewarded video ads from a number of ad providers. Supported networks and platforms include AdColony, AppLovin, Chartboost, Fyber, Upsight and Vungle, with more being added all the time.
You can learn more about this, and all our ads announcements on the Inside AdWords blog.
This is just the start of what we’ve got planned for 2016. We hope you can make use of these tools to improve your game, engage your audience, and grow your business and revenue.
Posted by Laurence Moroney, Developer Advocate
Google Cloud Messaging (GCM) is an infrastructure for simple, reliable messaging that lets you distribute messages to, and across, many devices.
Every day, GCM delivers over 150 billion messages to devices on various platforms, including Android devices, iOS devices and web browsers. It has a number of different techniques for sending messages:
Single devices. Each device has a unique registration token. If you want to reach that device -- for example, using GCM to build a 1:1 chat application -- you can do so by addressing it via that token.
Device Groups allow you to bundle devices together into a group. For example, one of your users might have multiple devices -- including the very common scenario of having both a phone and a tablet. Using Device Groups in GCM, you can send a message to all of her devices, and if you desire, you can implement your app so that dismissing a message on one device dismisses it on all of them.
Topics allow you to create interest groups for your users. Once they subscribe to a topic, you can send messages to that topic, and your users will receive them. There’s no subscription limit to these, so you don’t have to worry about how many users subscribe to your topics! Some great scenarios of topics being used to improve user experience can be found in this blog post.
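As a concrete (if simplified) example, here's a sketch of a server-side topic send through GCM's HTTP connection server endpoint from Python; the API key, topic name, and message contents are placeholders:

import requests

API_KEY = 'YOUR_SERVER_API_KEY'  # placeholder: your GCM server key
message = {
    'to': '/topics/deals',  # placeholder topic name
    'notification': {'title': 'Flash sale', 'body': 'Today only.'},
}
resp = requests.post('https://gcm-http.googleapis.com/gcm/send',
                     headers={'Authorization': 'key=' + API_KEY,
                              'Content-Type': 'application/json'},
                     json=message)
print(resp.status_code, resp.text)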
When it comes to reliability of messages, an internal study at Google found that notification messages are delivered to connected devices within 250ms at the 95th percentile. Delivery is impacted by many factors -- including carriers, routers and local connectivity. Indeed, in some locales it is common for people to disable mobile data for large parts of the day in order to save on bandwidth costs. In this scenario, users will still receive their notifications once they re-activate their data connection.
We’ve provided a number of resources to help you to build apps using GCM. Check out this talk where you are taken step by step through building an Android app and an associated server in PHP. There’s also an open source ‘GCM Playground’ on GitHub here, which provides a sample server implementation that runs on the Google Cloud Platform!
If you want to reach iOS users, today we're adding an API that will help you migrate your existing infrastructure to send notifications to iOS devices, with no client code changes required. With the new batch Import API, you can import the APNs device tokens that you collected from your iOS audience into GCM and immediately start sending notifications through GCM. After you import the APNs device tokens, you can also use the InstanceID API to transparently subscribe users to GCM topics, achieving efficient fan-out of notifications based on interest groups, once again with no changes required on the client.
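Here's a hedged sketch of what that import can look like as an HTTP call from Python; the bundle ID and token are placeholders, and the exact request shape should be checked against the batch import documentation:

import requests

resp = requests.post(
    'https://iid.googleapis.com/iid/v1:batchImport',
    headers={'Authorization': 'key=' + API_KEY,  # your GCM server key
             'Content-Type': 'application/json'},
    json={
        'application': 'com.example.myapp',  # placeholder iOS bundle ID
        'sandbox': False,  # True if the tokens came from the APNs sandbox
        'apns_tokens': ['<apns-device-token>'],  # placeholder token list
    })
print(resp.json())  # maps each APNs token to a GCM registration token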
We’re continuing to build and innovate on this platform -- stay tuned for lots of cool new features coming soon!
You can learn more about Google Cloud Messaging on the Developers site here, including quickstarts for Android and iOS!
Originally posted on Android Developers Blog
Posted by Roy Glasberg, Global Lead, Launchpad Program & Accelerator
Last month, 24 promising startups from India, Indonesia, and Brazil came to Silicon Valley to participate in Google’s Launchpad Accelerator, a new program that provides late-stage startups (mobile apps) with mentoring and resources to successfully scale in their local economies.
During the intensive two-week Accelerator kickoff in our Mountain View headquarters, Google engineers from 11 product areas, as well as experts from other companies, were on hand to provide startups with mentorship on how to scale and monetize their apps, and ultimately, build successful businesses. Now back in their home countries, the teams will continue developing their products with the support of up to $50,000 in equity-free funding, six more months of ongoing mentorship, and a breadth of developer tools from the Launchpad Accelerator program.
So far, many startup participants have already seen an immediate impact. Two weeks after attending the kickoff event, Brazilian mobile game developer UpBeat Games was featured on Google Play and saw a 1,000% increase in app installations in Asia, as well as a 200% overall increase in active users, by leveraging analytics to better understand their users.
According to UpBeat Games founder Vinicius Heimbeck, “By working one-on-one with the mentors, we learned that we needed to be a data-driven company. We now have the right analytics tools to measure the results of our efforts and to learn from them to optimize the user experience. This all directly impacted our huge success once we were featured on Google Play.”
eFishery, an Indonesian startup that produces smart automated fish feeders, turned its focus on scaling since attending Launchpad Accelerator. “The mentors gave us great insight about how to build a scalable product and how to engage billions of users,” said co-founder and CEO Gibran Chuzaefah Amsi El Farizy. “We received both technical and practical advice on our business, from building back-end technology to embracing failure with the right mindset.”
Apply now for Launchpad Accelerator
We are also excited to announce the second class of Launchpad Accelerator, which will begin in June 2016.
If you are a startup from India, Indonesia, Brazil, or Mexico (a new addition!) and are interested in participating in the next wave, we encourage you to apply here by March 31. We expect to continue adding more countries to the program in the future, so be on the lookout!