Posted by Vikram Tank (Product Manager), Coral Team
Coral’s had a busy summer working with customers, expanding distribution, and building new features — and of course taking some time for R&R. We’re excited to share updates, early work, and new models for our platform for local AI with you.
The compiler has been updated to version 2.0, adding support for models built using post-training quantization (full integer quantization only; previously we required quantization-aware training) and fixing a few bugs. As the TensorFlow team mentions in their Medium post, “post-training integer quantization enables users to take an already-trained floating-point model and fully quantize it to only use 8-bit signed integers (i.e. `int8`).” In addition to reducing the model size, models that are quantized with this method can now be accelerated by the Edge TPU found in Coral products.
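For reference, full integer post-training quantization with the TF 1.x tf.lite.TFLiteConverter looks roughly like the sketch below; the saved-model path and the calibration_images iterable are hypothetical stand-ins for your own model and representative data:

import tensorflow as tf

# A representative dataset lets the converter calibrate int8 ranges for
# activations. calibration_images is a hypothetical iterable of preprocessed
# input arrays with the model's expected shape.
def representative_dataset():
    for image in calibration_images:
        yield [image]

converter = tf.lite.TFLiteConverter.from_saved_model("my_model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict the converter to int8 ops so the whole graph is fully quantized,
# which is what the Edge TPU requires.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())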
We've also updated the Edge TPU Python library to version 2.11.1 to include new APIs for transfer learning on Coral products. The new on-device back-propagation API allows you to perform transfer learning on the last layer of an image classification model. The last layer of the model is removed before compilation and implemented on-device to run on the CPU, which allows for near-real-time transfer learning without recompiling the model. Our previously released imprinting API has been updated to allow you to quickly retrain existing classes or add new ones while leaving other classes alone, and you can now even keep the classes from the pre-trained base model. Learn more about both options for on-device transfer learning.
Until now, accelerating your model with the Edge TPU required that you write code using either our Edge TPU Python API or in C++. But now you can accelerate your model on the Edge TPU when using the TensorFlow Lite interpreter API, because we've released a TensorFlow Lite delegate for the Edge TPU. The TensorFlow Lite Delegate API is an experimental feature in TensorFlow Lite that allows for the TensorFlow Lite interpreter to delegate part or all of graph execution to another executor—in this case, the other executor is the Edge TPU. Learn more about the TensorFlow Lite delegate for Edge TPU.
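As a rough sketch of what that looks like in Python (the compiled model filename here is hypothetical, and the delegate library name can vary by platform):

import tflite_runtime.interpreter as tflite

# Load the Edge TPU delegate; 'libedgetpu.so.1' is the usual shared library
# name on Linux.
delegate = tflite.load_delegate('libedgetpu.so.1')

# Hand the delegate to the standard TensorFlow Lite interpreter so that
# supported ops run on the Edge TPU instead of the CPU.
interpreter = tflite.Interpreter(
    model_path='mobilenet_v2_edgetpu.tflite',
    experimental_delegates=[delegate])
interpreter.allocate_tensors()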
Coral has also been working with Edge TPU and AutoML teams to release EfficientNet-EdgeTPU: a family of image classification models customized to run efficiently on the Edge TPU. The models are based upon the EfficientNet architecture to achieve the image classification accuracy of a server-side model in a compact size that's optimized for low latency on the Edge TPU. You can read more about the models’ development and performance on the Google AI Blog, and download trained and compiled versions on the Coral Models page.
And as summer comes to an end, we also want to share that Arrow offers a student and teacher discount for those looking to experiment with the boards in class or the lab this year.
We're excited to keep evolving the Coral platform. Please keep sending us feedback at coral-support@google.com.
Posted by Brahim Elbouchikhi, Director of Product Management and Matej Pfajfar, Engineering Director
We launched ML Kit at I/O last year with the mission to simplify Machine Learning for everyone. We couldn’t be happier about the experiences that ML Kit has enabled thousands of developers to create. And more importantly, user engagement with features powered by ML Kit is growing more than 60% per month. Below is a small sample of apps we have been working with.
But there is a lot more. At I/O this year, we are excited to introduce four new features.
The Object Detection and Tracking API lets you identify the prominent object in an image and then track it in real-time. You can pair this API with a cloud solution (e.g. Google Cloud’s Product Search API) to create a real-time visual search experience.
When you pass an image or video stream to the API, it will return the coordinates of the primary object as well as a coarse classification. The API then provides a handle for tracking this object's coordinates over time.
A number of partners have built experiences that are powered by this API already. For example, Adidas built a visual search experience right into their app.
The On-device Translation API allows you to use the same offline models that support Google Translate to provide fast, dynamic translation of text in your app into 58 languages. This API operates entirely on-device so the context of the translated text never leaves the device.
You can use this API to enable users to communicate with others who don't understand their language or translate user-generated content.
To the right, we demonstrate the use of ML Kit’s text recognition, language detection, and translation APIs in one experience.
We also collaborated with the Material Design team to produce a set of design patterns for integrating ML into your apps. We are open sourcing implementations of these patterns and hope that they will further accelerate your adoption of ML Kit and AI more broadly.
Our design patterns for machine learning powered features will be available on the Material.io site.
With AutoML Vision Edge, you can easily create custom image classification models tailored to your needs. For example, you may want your app to be able to identify different types of food, or distinguish between species of animals. Whatever your need, just upload your training data to the Firebase console and Google's AutoML technology will build a custom TensorFlow Lite model for you to run locally on your users' devices. And if you find that collecting training datasets is hard, you can use our open source app, which makes the process simpler and more collaborative.
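Although the Firebase SDK is the intended path on Android and iOS, you can also sanity-check the .tflite file that AutoML Vision Edge produces with the TensorFlow Lite interpreter in Python; a minimal sketch (the model filename and the dummy input are purely illustrative):

import numpy as np
import tensorflow as tf

# Load the custom classifier exported by AutoML Vision Edge (hypothetical filename).
interpreter = tf.lite.Interpreter(model_path='food_classifier.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy image matching the model's expected input shape and dtype.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]['index'])
print('Top class index:', int(np.argmax(scores)))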
Wrapping up
We are excited by this first year and really hope that our progress will inspire you to get started with Machine Learning. Please head over to g.co/mlkit to learn more or visit Firebase to get started right away.
Coral has been public for about a month now, and we’ve heard some great feedback about our products. As we evolve the Coral platform, we’re making our products easier to use and exposing more powerful tools for building devices with on-device AI.
Today, we're updating the Edge TPU model compiler to remove the restrictions around specific architectures, allowing you to submit any model architecture that you want. This greatly increases the variety of models that you can run on the Coral platform. Just be sure to review the TensorFlow ops supported on Edge TPU and model design requirements to take full advantage of the Edge TPU at runtime.
We're also releasing a new version of Mendel OS (3.0 Chef) for the Dev Board with a new board management tool called Mendel Development Tool (MDT).
To help with the developer workflow, our new C++ API works with the TensorFlow Lite C++ API so you can execute inferences on an Edge TPU. In addition, both the Python and C++ APIs now allow you to run multiple models in parallel, using multiple Edge TPU devices.
In addition to these updates, we're adding new capabilities to Coral with the release of the Environmental Sensor Board. It's an accessory board for the Coral Dev Platform (and Raspberry Pi) that brings sensor input to your models. It has integrated light, temperature, humidity, and barometric sensors, and the ability to add more sensors via its four Grove connectors. The on-board secure element also allows for easy communication with Google Cloud IoT Core.
The team has also been working with partners to help them evaluate whether Coral is the right fit for their products. We’re excited that Oivi has chosen us to be the base platform of their new handheld AI-camera. This product will help prevent blindness among diabetes patients by providing early, automated detection of diabetic retinopathy. Anders Eikenes, CEO of Oivi, says “Oivi is dedicated towards providing patient-centric eye care for everyone - including emerging markets. We were honoured to be selected by Google to participate in their Coral alpha program, and are looking forward to our continued cooperation. The Coral platform gives us the ability to run our screening ML models inside a handheld device; greatly expanding the access and ease of diabetic retinopathy screening.”
Finally, we’re expanding our distributor network to make it easier to get Coral boards into your hands around the world. This month, Seeed and NXP will begin to sell Coral products, in addition to Mouser.
You can see the full release notes on the Coral site.
Posted by Roy Glasberg, Head of Launchpad
Launchpad's mission is to accelerate innovation and to help startups build world-class technologies by leveraging the best of Google - its people, network, research, and technology.
In September 2018, the Launchpad team welcomed ten of the world's leading FinTech startups to join their accelerator program, helping them fast-track their application of advanced technology. Today, March 15th, we will see this cohort graduate from the program at the Launchpad team's inaugural event - The Future of Finance - a global discussion on the impact of applied ML/AI on the finance industry. These startups are ensuring that everyone has relevant insights at their fingertips and that all people, no matter where they are, have access to equitable money, banking, loans, and marketplaces.
Tune in to the event from wherever you are via the livestream link.
The Graduating Class of Launchpad FinTech Accelerator San Francisco '19
Since joining the accelerator, these startups have made great strides and are going from strength to strength. Some recent announcements from this cohort include:
We look forward to following the success of all our participating founders as they continue to make a significant impact on the global economy.
Want to know more about the Launchpad Accelerator? Visit our site, stay updated on developments and future opportunities by subscribing to the Google Developers newsletter and visit The Launchpad Blog.
Posted by Billy Rutledge (Director) and Vikram Tank (Product Mgr), Coral Team
AI can be beneficial for everyone, especially when we all explore, learn, and build together. To that end, Google's been developing tools like TensorFlow and AutoML to ensure that everyone has access to build with AI. Today, we're expanding the ways that people can build out their ideas and products by introducing Coral into public beta.
Coral is a platform for building intelligent devices with local AI.
Coral offers a complete local AI toolkit that makes it easy to grow your ideas from prototype to production. It includes hardware components, software tools, and content that help you create, train, and run neural networks (NNs) locally, on your device. Because we focus on accelerating NNs locally, our products offer fast neural network performance and increased privacy — all in power-efficient packages. To help you bring your ideas to market, Coral components are designed for fast prototyping and easy scaling to production lines.
Our first hardware components feature the new Edge TPU, a small ASIC designed by Google that provides high-performance ML inferencing for low-power devices. For example, it can execute state-of-the-art mobile vision models such as MobileNet V2 at 100+ fps in a power-efficient manner.
Coral Camera Module, Dev Board and USB Accelerator
For new product development, the Coral Dev Board is a fully integrated system designed as a system on module (SoM) attached to a carrier board. The SoM brings the powerful NXP iMX8M SoC together with our Edge TPU coprocessor (as well as Wi-Fi, Bluetooth, RAM, and eMMC memory). To make prototyping computer vision applications easier, we also offer a Camera that connects to the Dev Board over a MIPI interface.
To add the Edge TPU to an existing design, the Coral USB Accelerator allows for easy integration into any Linux system (including Raspberry Pi boards) over USB 2.0 and 3.0. PCIe versions are coming soon, and will snap into M.2 or mini-PCIe expansion slots.
When you're ready to scale to production, we offer the SoM from the Dev Board and PCIe versions of the Accelerator for volume purchase. To further support your integrations, we'll be releasing the baseboard schematics for those who want to build custom carrier boards.
Our software tools are based around TensorFlow and TensorFlow Lite. TF Lite models must be quantized and then compiled with our toolchain to run directly on the Edge TPU. To help get you started, we're sharing over a dozen pre-trained, pre-compiled models that work with Coral boards out of the box, as well as software tools to let you re-train them.
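As an illustration, running one of these pre-compiled classification models with the Edge TPU Python API looks roughly like this (file names are placeholders; see the API docs for the exact method signatures):

from PIL import Image
from edgetpu.classification.engine import ClassificationEngine

# Load a pre-compiled model and its label file (placeholder file names).
engine = ClassificationEngine('mobilenet_v2_edgetpu.tflite')
labels = {int(k): v.strip() for k, v in
          (line.split(maxsplit=1) for line in open('labels.txt'))}

image = Image.open('parrot.jpg')
# classify_with_image returns (label_id, confidence) pairs for the top matches.
for label_id, score in engine.classify_with_image(image, top_k=3):
    print(labels[int(label_id)], score)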
For those building connected devices with Coral, our products can be used with Google Cloud IoT. Google Cloud IoT combines cloud services with an on-device software stack to allow for managed edge computing with machine learning capabilities.
Coral products are available today, along with product documentation, datasheets and sample code at g.co/coral. We hope you try our products during this public beta, and look forward to sharing more with you at our official launch.
Posted by Billy Rutledge, Director of AIY Projects
Over the past year and a half, we've seen more than 200K people build, modify, and create with our Voice Kit and Vision Kit products. Today at Cloud Next we announced two new devices to help professional engineers build new products with on-device machine learning (ML) at their core: the AIY Edge TPU Dev Board and the AIY Edge TPU Accelerator. Both are powered by Google's Edge TPU and represent our first steps towards expanding AIY into a platform for experimentation with on-device ML.
The Edge TPU is Google's purpose-built ASIC chip designed to run TensorFlow Lite ML models on your device. We've learned that performance-per-watt and performance-per-dollar are critical benchmarks when processing neural networks within a small footprint. The Edge TPU delivers both in a package that's smaller than the head of a penny. It can accelerate ML inferencing on device, or can pair with Google Cloud to create a full cloud-to-edge ML stack. In either configuration, by processing data directly on-device, a local ML accelerator increases privacy, removes the need for persistent connections, reduces latency, and allows for high performance using less power.
The AIY Edge TPU Dev Board is an all-in-one development board that allows you to prototype embedded systems that demand fast ML inferencing. The baseboard provides all the peripheral connections you need to effectively prototype your device — including a 40-pin GPIO header to integrate with various electrical components. The board also features a removable system-on-module (SOM) daughter board that can be directly integrated into your own hardware once you're ready to scale.
The AIY Edge TPU Accelerator is a neural network coprocessor for your existing system. This small USB-C stick can connect to any Linux-based system to perform accelerated ML inferencing. The casing includes mounting holes for attachment to host boards such as a Raspberry Pi Zero or your custom device.
On-device ML is still in its early days, and we're excited to see how these two products can be applied to solve real world problems — such as increasing manufacturing equipment reliability, detecting quality control issues in products, tracking retail foot-traffic, building adaptive automotive sensing systems, and more applications that haven't been imagined yet.
Both devices will be available online this fall in the US with other countries to follow shortly.
For more product information visit g.co/aiy and sign up to be notified as products become available.
Last year, AIY Projects launched to give makers the power to build AI into their projects with two do-it-yourself kits. We're seeing continued demand for the kits, especially from the STEM audience where parents and teachers alike have found the products to be great tools for the classroom. The changing nature of work in the future means students may have jobs that haven't yet been imagined, and we know that computer science skills, like analytical thinking and creative problem solving, will be crucial.
We're taking the first of many steps to help educators integrate AIY into STEM lesson plans and help prepare students for the challenges of the future by launching a new version of our AIY kits. The Voice Kit lets you build a voice controlled speaker, while the Vision Kit lets you build a camera that learns to recognize people and objects (check it out here). The new kits make getting started a little easier with clearer instructions, a new app and all the parts in one box.
To make setup easier, both kits have been redesigned to work with the new Raspberry Pi Zero WH, which comes included in the box, along with the USB connector cable and pre-provisioned SD card. Now users no longer need to download the software image and can get running faster. The updated AIY Vision Kit v1.1 also includes the Raspberry Pi Camera v2.
AIY Voice Kit v2 includes Raspberry Pi Zero WH and pre-provisioned SD card
AIY Vision Kit v1.1 includes Raspberry Pi Zero WH, Raspberry Pi Cam 2 and pre-provisioned SD card
We're also introducing the AIY companion app for Android, available here in Google Play, to make wireless setup and configuration a snap. The kits still work with a monitor, keyboard, and mouse as an alternate path, and we're working on iOS and Chrome companion apps, which are coming soon.
The AIY website has been refreshed with improved documentation, making it easier for young makers to get started and learn as they build. It also includes a new AIY Models area, showcasing a collection of neural networks designed to work with AIY kits. While we've removed one barrier to entry for the STEM audience, we recognize that there are many other things we can do to make our kits even more useful. We'll once again be at #MakerFaire events to gather feedback from our users, and in June we'll be working with teachers from all over the world at the ISTE conference in Chicago.
The new AIY Voice Kit and Vision Kit have arrived at Target stores and Target.com (US) this month, and we're working to make them available through retailers worldwide. Sign up on our mailing list to be notified when our products become available.
We hope you'll pick up one of the new AIY kits and learn more about how to build your own smart devices. Be sure to share your recipes on Hackster.io and social media using #aiyprojects.
We're delighted to announce that TensorFlow 1.5 is now public! Install it now to get a bunch of new features that we hope you'll enjoy!
First off, Eager Execution for TensorFlow is now available as a preview. We've heard lots of feedback about the programming style of TensorFlow, and how developers really want an imperative, define-by-run programming style. With Eager Execution for TensorFlow enabled, you can execute TensorFlow operations immediately as they are called from Python. This makes it easier to get started with TensorFlow, and can make research and development more intuitive.
For example, think of a simple computation like a matrix multiplication. Today, in TensorFlow it would look something like this:
x = tf.placeholder(tf.float32, shape=[1, 1])
m = tf.matmul(x, x)

with tf.Session() as sess:
    print(sess.run(m, feed_dict={x: [[2.]]}))
If you enable Eager Execution for TensorFlow, it will look more like this:
x = [[2.]]
m = tf.matmul(x, x)
print(m)
You can learn more about Eager Execution for TensorFlow here (check out the user guide linked at the bottom of the page, and also this presentation) and the API docs here.
The developer preview of TensorFlow Lite is built into version 1.5. TensorFlow Lite, TensorFlow's lightweight solution for mobile and embedded devices, lets you take a trained TensorFlow model and convert it into a .tflite file, which can then be executed on a mobile device with low latency. The training doesn't have to be done on the device, nor does the device need to upload data to the cloud for processing. So, for example, if you want to classify an image, a trained model can be deployed to the device and classification of the image happens on-device directly.
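For a flavor of the conversion workflow, here's a minimal sketch using the tf.contrib.lite preview interface shipped with this release (the toy graph is purely illustrative):

import tensorflow as tf

# Build a trivial graph to convert (illustrative only).
img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 3))
out = tf.identity(img + tf.constant([1., 2., 3.]), name="out")

with tf.Session() as sess:
    # Convert the graph to the TensorFlow Lite flatbuffer format.
    tflite_model = tf.contrib.lite.toco_convert(sess.graph_def, [img], [out])

with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)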
TensorFlow Lite includes a sample app to get you started. The app uses a MobileNet model covering 1,001 image categories: it takes an image, matches it against those categories, and lists the top three results. The app is available on both Android and iOS.
You can learn more about TensorFlow Lite, and how to convert your models to be available on mobile here.
If you are using GPU Acceleration on Windows or Linux, TensorFlow 1.5 now has CUDA 9 and cuDNN 7 support built-in.
To learn more about NVIDIA's Compute Unified Device Architecture (CUDA) 9, check out NVIDIA's site here.
This is complemented by the CUDA Deep Neural Network library (cuDNN), whose latest release is version 7; support for it is now included in TensorFlow 1.5.
Here are some Medium articles on GPU support for Windows and Linux, and how to install CUDA and cuDNN on your workstation (if it has the requisite hardware).
In line with this release we've also overhauled the documentation site, including an improved Getting Started flow that will get you from no knowledge to building a neural network to classify different types of iris in a very short time. Check it out!
Beyond these features, there are lots of other enhancements to Accelerated Linear Algebra (XLA), updates to RunConfig, and much more. Check the release notes here.
To get TensorFlow 1.5, you can use the standard pip installation (or pip3 if you use Python 3):
$ pip install --ignore-installed --upgrade tensorflow
Since we released AIY Voice Kit, we've been inspired by the thousands of amazing builds coming in from the maker community. Today, the AIY Team is excited to announce our next project: the AIY Vision Kit — an affordable, hackable, intelligent camera.
Much like the Voice Kit, our Vision Kit is easy to assemble and connects to a Raspberry Pi computer. Based on user feedback, this new kit is designed to work with the smaller Raspberry Pi Zero W computer and runs its vision algorithms on-device so there's no cloud connection required.
The kit materials list includes a VisionBonnet, a cardboard outer shell, an RGB arcade-style button, a piezo speaker, a macro/wide lens kit, flex cables, standoffs, a tripod mounting nut and connecting components.
The VisionBonnet is an accessory board for Raspberry Pi Zero W that features the Intel® Movidius™ MA2450, a low-power vision processing unit capable of running neural networks. This will give makers visual perception instead of image sensing. It can run at speeds of up to 30 frames per second, providing near real-time performance.
Bundled with the software image are three neural network models:
For those of you who have your own models in mind, we've included the original TensorFlow code and a compiler. Take a new model you have (or train) and run it on the Intel® Movidius™ MA2450.
The AIY Vision Kit is completely hackable:
We hope you'll use it to solve interesting challenges, such as:
AIY Vision Kits will be available in December, with online pre-sales at Micro Center starting today.
*** Please note that AIY Vision Kit requires Raspberry Pi Zero W, Raspberry Pi Camera V2 and a micro SD card, which must be purchased separately.
We're listening — let us know how we can improve our kits and share what you're making using the #AIYProjects hashtag on social media. We hope AIY Vision Kit inspires you to build all kinds of creative devices.
Posted by Billy Rutledge, Director, AIY Projects
Makers are hands-on when it comes to making change. We're explorers, hackers and problem solvers who build devices, ecosystems, art (sometimes a combination of the three) on the basis of our own (often unconventional) ideas. So when my team first set out to empower makers of all types and ages with the AI technology we've honed at Google, we knew whatever we built had to be open and accessible. We steered clear of limitations that come from platform and software stack requirements, high cost and complex setup, and fixed our focus on the curiosity and inventiveness that inspire makers around the world.
When we launched our Voice Kit with help from our partner Raspberry Pi in May and sold out globally in just a few hours, we got the message loud and clear. There is a genuine demand among do-it-yourselfers for artificial intelligence that makes human-to-machine interaction more like natural human interaction.
Last week we announced the Speech Commands Dataset, a collaboration between the TensorFlow and AIY teams. The dataset has 65,000 one-second-long utterances of 30 short words, contributed by thousands of different people through the AIY website, and allows you to build simple voice interfaces for applications. We're currently in the process of integrating the dataset with the next release of the Voice Kit, so makers can build devices that respond to simple voice commands without the press of a button or an internet connection.
Today, you can pre-order your Voice Kit, which will be available for purchase in stores and online through Micro Center.
Or you may have to resort to the hack that maker Shivasiddarth created when the Voice Kit (bundled with MagPi #57) sold out in May, and then again (within 17 minutes) earlier this month.
Martin Mander created a retro-inspired intercom that he calls 1986 Google Pi Intercom. He describes it as "a wall-mounted Google voice assistant using a Raspberry PI 3 and the Google AIY (Artificial Intelligence Yourself) [voice] kit." He used a mid-80s intercom that he bought on sale for £4. It cleaned up well!
Get the full story from Martin and see what Slashgear had to say about the project.
(This one's for Doctor Who fans) Tom Minnich created a Dalek-voiced assistant.
He offers a tutorial on how you can modify the Voice Kit to do something similar — perhaps create a Drogon-voiced assistant?
Victor Van Hee used the Voice Kit to create a voice-activated internet streaming radio that can play other types of audio files as well. He provides instructions, so you can do the same.
The Voice Kit is currently available in the U.S. We'll be expanding globally by the end of this year. Stay tuned here, where we'll share the latest updates. The strong demand for the Voice Kit drives us to keep the momentum going on AIY Projects.
What we build next will include vision and motion detection and will go hand in hand with our existing Voice Kit. AIY Project kits will soon offer makers the "eyes," "ears," "voice" and sense of "balance" to allow simple yet powerful device interfaces.
We'd love to bake your input into our next releases. Go to hackster.io or leave a comment to start up a conversation with us. Show us and the maker community what you're working on by using hashtag #AIYprojects on social media.