We're excited to announce the three new startups joining Launchpad Studio, our 6-month mentorship program tailored to help applied machine learning startups build great products using the most advanced tools and technologies available. We intend to support these startups by leveraging some of our platforms like Google Cloud Platform, TensorFlow, and Android, while also providing one-on-one support from product and research experts from several Google teams including Google Cloud, Verily, X, Brain, and ML Research. Launchpad Studio has also enlisted the expertise of a number of top industry practitioners and thought leaders to ensure Studio startups are successful over the long term. These three startups were selected based on the novel ways they've applied ML to important challenges in the Healthcare industry:
The cost of treating heart failure in the US is currently estimated at ~$40bn annually. With the continued aging of the US population, the impact of Congestive Heart Failure is expected to increase substantially.
Through lightweight, low-cost, cloth-based form factors, Nanowear can capture and transmit medical-grade data directly from the skin, enabling deep analytics and prescriptive recommendations. As a first product application, Nanowear's SimpleSense aims to transform Congestive Heart Failure management.
Nanowear intends to develop predictive models that provide both physicians and patients with leading indicators and data to anticipate potential hospitalizing events. Combining these datasets with deep machine learning capabilities will position Nanowear at the epicenter of the next generation of telemedicine and connected-self healthcare.
With the big data revolution, the medical and scientific communities have more information to work with than in all of history combined. However, with such a wealth of information, it is increasingly difficult to differentiate productive leads from dead ends.
Artificial intelligence and machine learning powered by systems biology can organize, validate, predict and compare the overabundance of information. Owkin builds mathematical models and algorithms that can interpret omics, visual data, biostatistics and patient profiles.
Owkin is focused on federated learning in healthcare to overcome the data sharing problem, building collective intelligence from distributed data.
A low ratio of healthcare specialists to patients and a lack of interoperability between medical devices cause exam results in Brazil to take an average of 60 days to be ready, cost hundreds of dollars, and leave millions of people with no access to quality healthcare.
The standard solution to this problem is telemedicine, but the lack of direct, automatic communication with medical devices and of pre-processing AI behind it hurts scalability, resulting in very low adoption worldwide.
Portal Telemedicina is a digital healthcare platform that provides reliable, fast, low-cost online diagnostics to hundreds of cities in Brazil. Thanks to revolutionary communication protocols and AI automation, the solution enables interoperability across systems and patients, and exams are handled seamlessly from medical devices to diagnostics. The company has built a huge proprietary dataset and uses Google's TensorFlow to train machine learning algorithms on millions of images and correlated health records to predict pathologies with human-level accuracy.
Leveraging artificial intelligence to empower doctors, the startup improves millions of lives in Brazil and wants to expand to provide universal access to healthcare.
Each startup will get tailored, equity-free support, with the goal of successfully completing an ML-focused project during the term of the program. To support this process, we provide resources, including deep engagement with engineers in Google Cloud, Google X, and other product teams, as well as Google Cloud credits. We also include both Google Cloud Platform and G Suite training in our engagement with all Studio startups.
Based in San Francisco, Launchpad Studio is a fully tailored product development acceleration program that matches top ML startups and experts from Silicon Valley with the best of Google - its people, network, and advanced technologies - to help accelerate applied ML and AI innovation. The program's mandate is to support the growth of the ML ecosystem, and to develop product methodologies for ML.
Launchpad Studio is looking to work with the best and most game-changing ML startups from around the world. While we're currently focused on working with startups in the Healthcare and Biotech space, we'll soon be announcing other industry verticals, and any startup applying AI / ML technology to a specific industry vertical can apply on a rolling basis.
We're delighted to announce that TensorFlow 1.5 is now public! Install it now to get a bunch of new features that we hope you'll enjoy!
First off, Eager Execution for TensorFlow is now available as a preview. We've heard lots of feedback about TensorFlow's programming style, and that many developers really want an imperative, define-by-run style. With Eager Execution for TensorFlow enabled, you can execute TensorFlow operations immediately as they are called from Python. This makes it easier to get started with TensorFlow, and can make research and development more intuitive.
For example, think of a simple computation like a matrix multiplication. Today, in TensorFlow it would look something like this:
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[1, 1])
m = tf.matmul(x, x)

with tf.Session() as sess:
    print(sess.run(m, feed_dict={x: [[2.]]}))
If you enable Eager Execution for TensorFlow, it will look more like this:
x = [[2.]]
m = tf.matmul(x, x)
print(m)
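Note that the snippet above assumes eager mode has already been switched on at the start of your program. In the 1.5 preview the switch lives under tf.contrib.eager (check the user guide linked below for the exact details); a self-contained version looks roughly like this:

import tensorflow as tf
import tensorflow.contrib.eager as tfe

# Eager execution must be enabled once, at program startup,
# before any other TensorFlow operations run.
tfe.enable_eager_execution()

x = [[2.]]
m = tf.matmul(x, x)
print(m)  # the result prints immediately, no Session required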
You can learn more about Eager Execution for TensorFlow here (check out the user guide linked at the bottom of the page, and also this presentation) and the API docs here.
The developer preview of TensorFlow Lite is built into version 1.5. TensorFlow Lite, TensorFlow's lightweight solution for mobile and embedded devices, lets you take a trained TensorFlow model and convert it into a .tflite file which can then be executed on a mobile device with low latency. This means training doesn't have to be done on the device, nor does the device need to upload data to the cloud for processing. So, for example, if you want to classify an image, a trained model can be deployed to the device and the image classified on-device directly.
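If you'd like a feel for what the conversion step looks like from Python, here's a minimal sketch. It assumes the tf.contrib.lite.toco_convert helper that shipped with the developer preview and uses a toy one-op graph as a stand-in for a real trained model; the TensorFlow Lite docs linked below are the authoritative reference.

import tensorflow as tf

# Toy graph standing in for a real trained model: one input, one op, one output.
img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 3))
out = tf.identity(img + tf.constant([1., 2., 3.]), name="out")

with tf.Session() as sess:
    # Convert the graph and its input/output tensors into the TensorFlow Lite flatbuffer format.
    tflite_model = tf.contrib.lite.toco_convert(sess.graph_def, [img], [out])

# The resulting bytes are the .tflite file you bundle with your mobile app.
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)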
TensorFlow Lite includes a sample app to get you started. This app uses a MobileNet model trained on 1,001 unique image categories. It takes an image, matches it against those categories, and lists the top 3 it recognizes. The app is available on both Android and iOS.
You can learn more about TensorFlow Lite, and how to convert your models to be available on mobile here.
If you are using GPU Acceleration on Windows or Linux, TensorFlow 1.5 now has CUDA 9 and cuDNN 7 support built-in.
To learn more about NVIDIA's Compute Unified Device Architecture (CUDA) 9, check out NVIDIA's site here.
This is enhanced by the CUDA Deep Neural Network Library (cuDNN), the latest release of which is version 7. Support for this is now included in TensorFlow 1.5.
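If you want to sanity-check that your CUDA 9 / cuDNN 7 setup is actually visible to TensorFlow, one quick way is to list the devices it has registered (the exact device names you see will depend on your machine):

import tensorflow as tf
from tensorflow.python.client import device_lib

# Lists every device TensorFlow can see; a working CUDA 9 / cuDNN 7 install
# should show at least one entry with device_type "GPU" alongside the CPU.
for device in device_lib.list_local_devices():
    print(device.name, device.device_type)

# A simple boolean answer to the same question.
print("GPU available:", tf.test.is_gpu_available())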
Here are some Medium articles on GPU support on Windows and Linux, and how to set it up on your workstation (if it has the requisite hardware).
In line with this release we've also overhauled the documentation site, including an improved Getting Started flow that will get you from no knowledge to building a neural network to classify different types of iris in a very short time. Check it out!
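The Getting Started guide itself is the authoritative walkthrough, but as a rough illustration of the kind of model that flow ends with, here's a minimal premade-Estimator sketch for an iris-style classifier. The feature names and tiny in-memory dataset below are illustrative placeholders, not the guide's exact code:

import numpy as np
import tensorflow as tf

# Four numeric features, as in the classic iris dataset.
feature_columns = [
    tf.feature_column.numeric_column(key)
    for key in ["sepal_length", "sepal_width", "petal_length", "petal_width"]
]

# A small fully connected network with two hidden layers and three output classes.
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[10, 10],
    n_classes=3)

# Tiny in-memory stand-in for the real training data.
features = {
    "sepal_length": np.array([5.1, 6.2], dtype=np.float32),
    "sepal_width": np.array([3.5, 2.9], dtype=np.float32),
    "petal_length": np.array([1.4, 4.3], dtype=np.float32),
    "petal_width": np.array([0.2, 1.3], dtype=np.float32),
}
labels = np.array([0, 1], dtype=np.int32)

train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x=features, y=labels, batch_size=2, num_epochs=None, shuffle=True)

classifier.train(input_fn=train_input_fn, steps=100)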
Beyond these features, there are lots of other enhancements to Accelerated Linear Algebra (XLA), updates to RunConfig and much more. Check the release notes here.
To get TensorFlow 1.5, you can use the standard pip installation (or pip3 if you use Python 3):
$ pip install --ignore-installed --upgrade tensorflow
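If you've set up the CUDA 9 and cuDNN 7 stack described above and want the GPU-accelerated build, it's published as a separate package:

$ pip install --ignore-installed --upgrade tensorflow-gpu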