Google Research Blog
The latest news from Research at Google
Wide & Deep Learning: Better Together with TensorFlow
Wednesday, June 29, 2016
Posted by Heng-Tze Cheng, Senior Software Engineer, Google Research
The human brain is a sophisticated learning machine, forming rules by memorizing everyday events (“sparrows can fly” and “pigeons can fly”) and generalizing those learnings to apply to things we haven't seen before (“animals with wings can fly”). Perhaps more powerfully, memorization also allows us to further refine our generalized rules with exceptions (“penguins can't fly”). As we were exploring how to advance machine intelligence, we asked ourselves the question—can we teach computers to learn like humans do, by combining the power of memorization and generalization?
It's not an easy question to answer, but by jointly training a wide linear model (for memorization) alongside a deep neural network (for generalization), one can combine the strengths of both to bring us one step closer. At Google, we call it
Wide & Deep Learning
. It's useful for generic large-scale regression and classification problems with sparse inputs (
categorical features
with a large number of possible feature values), such as recommender systems, search, and ranking problems.
Today we’re open-sourcing our implementation of Wide & Deep Learning as part of the
TF.Learn API
so that you can easily train a model yourself. Please check out the TensorFlow tutorials on
Linear Models
and
Wide & Deep Learning
, as well as our
research paper
to learn more.
How Wide & Deep Learning works.
Let's say one day you wake up with an idea for a new app called
FoodIO
*
. A user of the app just needs to say out loud what kind of food they are craving (the
query
). The app magically predicts the dish that the user will like best, and the dish gets delivered to the user's front door (the
item
). Your key metric is consumption rate—if a dish was eaten by the user, the score is 1; otherwise it's 0 (the
label
).
You come up with some simple rules to start, like returning the items that match the most characters in the query, and you release the first version of FoodIO. Unfortunately, you find that the consumption rate is pretty low because the matches are too crude to be really useful (people shouting “fried chicken” end up getting “chicken fried rice”), so you decide to add machine learning to learn from the data.
The Wide model.
In the 2nd version, you want to memorize what items work the best for each query. So, you train a linear model in TensorFlow with a
wide
set of cross-product feature transformations to capture how the co-occurrence of a query-item feature pair correlates with the target label (whether or not an item is consumed). The model predicts the probability of consumption P(consumption | query, item) for each item, and FoodIO delivers the top item with the highest predicted consumption rate. For example, the model learns that feature
AND(query="fried chicken", item="chicken and waffles")
is a huge win, while
AND(query="fried chicken", item="chicken fried rice")
doesn't get as much love even though the character match is higher. In other words, FoodIO 2.0 does a pretty good job
memorizing
what users like, and it starts to get more traction.
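The cross-product transformation described above can be sketched in a few lines of plain Python. The feature strings mirror the examples in the text, but the weight values are illustrative stand-ins for what the linear model would learn from consumption labels, not output of the released API:

```python
# Sketch of a cross-product feature transformation for a wide linear model.
# The weights below are illustrative, not learned values.

def cross_feature(query, item):
    """Combine two sparse categorical values into a single crossed feature."""
    return f'AND(query="{query}", item="{item}")'

# A toy weight table the linear model might learn from consumption labels.
weights = {
    'AND(query="fried chicken", item="chicken and waffles")': 2.1,
    'AND(query="fried chicken", item="chicken fried rice")': -0.9,
}

def wide_score(query, item):
    # Score is the learned weight of the crossed feature (0 if never seen).
    return weights.get(cross_feature(query, item), 0.0)
```

Because each crossed feature is a single sparse entry, the wide model can memorize very specific query-item rules like these without affecting unrelated pairs.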
The Deep model.
Later on you discover that many users are saying that they're tired of the recommendations. They're eager to discover similar but different cuisines with a “surprise me” state of mind. So you brush up on your TensorFlow toolkit again and train a
deep
feed-forward neural network for FoodIO 3.0. With your deep model, you're learning lower-dimensional dense representations (usually called embedding vectors) for every query and item. With that, FoodIO is able to
generalize
by matching items to queries that are close to each other in the embedding space. For example, you find that people who asked for “fried chicken” often don't mind having “burgers” as well.
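The generalization step reduces to nearness in embedding space. A minimal sketch with NumPy, using made-up three-dimensional vectors (in the real model the embeddings are learned and much higher-dimensional):

```python
import numpy as np

# Toy embedding table; in the real model these vectors are learned.
emb = {
    "fried chicken": np.array([0.9, 0.1, 0.3]),
    "burgers":       np.array([0.8, 0.2, 0.4]),
    "iced latte":    np.array([0.1, 0.9, 0.7]),
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "fried chicken" sits much closer to "burgers" than to "iced latte",
# so the deep model can generalize across the two comfort foods.
```

This is exactly why the deep model can recommend “burgers” for a “fried chicken” query even if that pair never co-occurred in training.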
Combining Wide and Deep models.
However, you discover that the deep neural network sometimes generalizes too much and recommends irrelevant dishes. You dig into the historic traffic, and find that there are actually two distinct types of query-item relationships in the data.
The first type of query is very targeted. People shouting very specific items like “iced decaf latte with nonfat milk” really mean it. Just because it's pretty close to “hot latte with whole milk” in the embedding space doesn't mean it's an acceptable alternative. And there are millions of these rules where the transitivity of embeddings may actually do more harm than good. On the other hand, queries that are more exploratory, like “seafood” or “italian food”, may be open to more generalization and to discovering a diverse set of related items. Having realized this, you have an epiphany: Why do I have to choose either wide or deep models? Why not both?
Finally, you build FoodIO 4.0 with Wide & Deep Learning in TensorFlow. As shown in the graph above, the sparse features like
query="fried chicken"
and
item="chicken fried rice"
are used in both the wide part (left) and the deep part (right) of the model. During training, the prediction errors are backpropagated to both sides to train the model parameters. The cross-feature transformation in the wide model component can memorize all those sparse, specific rules, while the deep model component can generalize to similar items via embeddings.
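Conceptually, joint training is simple: the two components contribute summed logits to a single sigmoid, and the prediction error flows back through both. A minimal sketch (the function name is illustrative, not from the TF.Learn API):

```python
import math

def wide_and_deep_probability(wide_logit, deep_logit, bias=0.0):
    """Joint prediction: the wide and deep components each produce a logit;
    the logits are summed and passed through a sigmoid to give
    P(consumption | query, item)."""
    return 1.0 / (1.0 + math.exp(-(wide_logit + deep_logit + bias)))
```

Because a single loss is computed on the combined output, a strong memorized rule on the wide side can override an over-eager generalization from the deep side, and vice versa.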
Wider. Deeper. Together.
We're excited to share the TensorFlow API and implementation of Wide & Deep Learning with you, so you can try out your ideas with it and share your findings with everyone else. To get started, check out the code on
GitHub
and our TensorFlow tutorials on
Linear Models
and
Wide & Deep Learning
.
Acknowledgement
Bringing Wide & Deep from idea and research to implementation has been a huge team effort. We'd like to thank all the people who have contributed to the project or have given us advice, including: Heng-Tze Cheng, Mustafa Ispir, Zakaria Haque, Lichan Hong, Rohan Anil, Denis Baylor, Vihan Jain, Salem Haykal, Robson Araujo, Xiaobing Liu, Yonghui Wu, Thomas Strohmann, Tal Shaked, Jeremiah Harmsen, Greg Corrado, Glen Anderson, D. Sculley, Tushar Chandra, Ed Chi, Rajat Monga, Rob von Behren, Jarek Wilkiewicz, Christine Robson, Illia Polosukhin, Martin Wicke, Gus Katsiapis, Alexandre Passos, Olivier Chapelle, Levent Koc, Akshay Naresh Modi, Wei Chai, Hrishi Aradhye, Othar Hansson, Xinran He, Martin Zinkevich, Joe Toth, Anton Rusanov, Hemal Shah, Petros Mol, Frank Li, Yutaka Suematsu, Sameer Ahuja, Eugene Brevdo, Philip Tucker, Shanqing Cai, Kester Tong, and more.
*
For illustration only. FoodIO is not a real app.
Bringing Precision to the AI Safety Discussion
Tuesday, June 21, 2016
Posted by Chris Olah, Google Research
We believe that AI technologies are likely to be overwhelmingly useful and beneficial for humanity. But part of being a responsible steward of any new technology is thinking through potential challenges and how best to address any associated risks. So today we’re publishing a technical paper,
Concrete Problems in AI Safety
, a collaboration among scientists at Google, OpenAI, Stanford and Berkeley.
While possible AI safety risks have received a lot of public attention, most previous discussion has been very hypothetical and speculative. We believe it’s essential to ground concerns in real machine learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.
We’ve outlined five problems we think will be very important as we apply AI in more general circumstances. These are all forward-thinking, long-term research questions -- minor issues today, but important to address for future systems:
Avoiding Negative Side Effects:
How can we ensure that an AI system will not disturb its environment in negative ways while pursuing its goals, e.g. a cleaning robot knocking over a vase because it can clean faster by doing so?
Avoiding Reward Hacking:
How can we avoid gaming of the reward function? For example, we don’t want this cleaning robot simply covering over messes with materials it can’t see through.
Scalable Oversight:
How can we efficiently ensure that a given AI system respects aspects of the objective that are too expensive to be frequently evaluated during training? For example, if an AI system gets human feedback as it performs a task, it needs to use that feedback efficiently because asking too often would be annoying.
Safe Exploration:
How do we ensure that an AI system doesn’t make exploratory moves with very negative repercussions? For example, maybe a cleaning robot should experiment with mopping strategies, but clearly it shouldn’t try putting a wet mop in an electrical outlet.
Robustness to Distributional Shift:
How do we ensure that an AI system recognizes, and behaves robustly, when it’s in an environment very different from its training environment? For example, heuristics learned for a factory workfloor may not be safe enough for an office.
We go into more technical detail
in the paper
. The machine learning research community has already thought quite a bit about most of these problems and many related issues, but we think there’s a lot more work to be done.
We believe in rigorous, open, cross-institution work on how to build machine learning systems that work as intended. We’re eager to continue our collaborations with other research groups to make positive progress on AI.
Research at Google and ICLR 2016
Sunday, May 01, 2016
Posted by Dumitru Erhan, Gentleman Scientist
This week, San Juan, Puerto Rico hosts the
4th International Conference on Learning Representations
(ICLR 2016), a conference focused on how one can learn meaningful and useful representations of data for
Machine Learning
. ICLR includes conference and workshop tracks, with invited talks along with oral and poster presentations of some of the latest research on deep learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and issues regarding non-convex optimization.
At the forefront of innovation in cutting-edge technology in
Neural Networks
and
Deep Learning
, Google focuses on both theory and application, developing learning approaches to understand and generalize. As Platinum Sponsor of ICLR 2016, Google will have a strong presence with over 40 researchers attending (many from the
Google Brain team
and
Google DeepMind
), contributing to and learning from the broader academic research community by presenting papers and posters, in addition to participating on organizing committees and in workshops.
If you are attending ICLR 2016, we hope you’ll stop by our booth and chat with our researchers about the projects and opportunities at Google that go into solving interesting problems for billions of people. You can also learn more about our research being presented at ICLR 2016 in the list below (Googlers highlighted in
blue
).
Organizing Committee
Program Chairs
Samy Bengio
, Brian Kingsbury
Area Chairs include:
John Platt
,
Tara Sainath
Oral Sessions
Neural Programmer-Interpreters
(Best Paper Award Recipient)
Scott Reed,
Nando de Freitas
Net2Net: Accelerating Learning via Knowledge Transfer
Tianqi Chen,
Ian Goodfellow
,
Jon Shlens
Conference Track Posters
Prioritized Experience Replay
Tom Schaul
,
John Quan
,
Ioannis Antonoglou
,
David Silver
Reasoning about Entailment with Neural Attention
Tim Rocktäschel,
Edward Grefenstette
,
Karl Moritz Hermann
,
Tomáš Kočiský
,
Phil Blunsom
Neural Programmer: Inducing Latent Programs With Gradient Descent
Arvind Neelakantan,
Quoc Le
,
Ilya Sutskever
MuProp: Unbiased Backpropagation For Stochastic Neural Networks
Shixiang Gu,
Sergey Levine
,
Ilya Sutskever
,
Andriy Mnih
Multi-Task Sequence to Sequence Learning
Minh-Thang Luong,
Quoc Le
,
Ilya Sutskever
,
Oriol Vinyals
,
Lukasz Kaiser
A Test of Relative Similarity for Model Selection in Generative Models
Eugene Belilovsky, Wacha Bounliphone, Matthew Blaschko,
Ioannis Antonoglou
, Arthur Gretton
Continuous control with deep reinforcement learning
Timothy Lillicrap
,
Jonathan Hunt
,
Alexander Pritzel
,
Nicolas Heess
,
Tom Erez
,
Yuval Tassa
,
David Silver
,
Daan Wierstra
Policy Distillation
Andrei Rusu
,
Sergio Gomez
,
Caglar Gulcehre,
Guillaume Desjardins
,
James Kirkpatrick
,
Razvan Pascanu
,
Volodymyr Mnih
,
Koray Kavukcuoglu
,
Raia Hadsell
Neural Random-Access Machines
Karol Kurach
,
Marcin Andrychowicz
,
Ilya Sutskever
Variable Rate Image Compression with Recurrent Neural Networks
George Toderici
,
Sean O'Malley
,
Damien Vincent
,
Sung Jin Hwang
,
Michele Covell
,
Shumeet Baluja
,
Rahul Sukthankar
,
David Minnen
Order Matters: Sequence to Sequence for Sets
Oriol Vinyals
,
Samy Bengio
,
Manjunath Kudlur
Grid Long Short-Term Memory
Nal Kalchbrenner
,
Alex Graves
,
Ivo Danihelka
Neural GPUs Learn Algorithms
Lukasz Kaiser
,
Ilya Sutskever
ACDC: A Structured Efficient Linear Layer
Marcin Moczulski,
Misha Denil
, Jeremy Appleyard,
Nando de Freitas
Workshop Track Posters
Revisiting Distributed Synchronous SGD
Jianmin Chen
,
Rajat Monga
,
Samy Bengio
,
Rafal Jozefowicz
Black Box Variational Inference for State Space Models
Evan Archer, Il Memming Park,
Lars Buesing
, John Cunningham, Liam Paninski
A Minimalistic Approach to Sum-Product Network Learning for Real Applications
Viktoriya Krakovna,
Moshe Looks
Efficient Inference in Occlusion-Aware Generative Models of Images
Jonathan Huang
,
Kevin Murphy
Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
Christian Szegedy
,
Sergey Ioffe
,
Vincent Vanhoucke
Deep Autoresolution Networks
Gabriel Pereyra
,
Christian Szegedy
Learning visual groups from co-occurrences in space and time
Phillip Isola, Daniel Zoran,
Dilip Krishnan
, Edward H. Adelson
Adding Gradient Noise Improves Learning For Very Deep Networks
Arvind Neelakantan, Luke Vilnis,
Quoc V. Le
,
Ilya Sutskever
,
Lukasz Kaiser
,
Karol Kurach
, James Martens
Adversarial Autoencoders
Alireza Makhzani,
Jonathon Shlens
,
Navdeep Jaitly
,
Ian Goodfellow
Generating Sentences from a Continuous Space
Samuel R. Bowman, Luke Vilnis,
Oriol Vinyals
,
Andrew M. Dai
,
Rafal Jozefowicz
,
Samy Bengio
Announcing TensorFlow 0.8 – now with distributed computing support!
Wednesday, April 13, 2016
Posted by Derek Murray, Software Engineer
Google uses machine learning across a wide range of its products. In order to continually improve our models, it's crucial that the training process be as fast as possible. One way to do this is to run
TensorFlow
across hundreds of machines, which shortens the training process for some models from weeks to hours, and allows us to experiment with models of increasing size and sophistication. Ever since we released TensorFlow as an open-source project, distributed training support has been one of the most requested features. Now the wait is over.
Today, we're excited to release TensorFlow 0.8 with distributed computing support, including everything you need to train distributed models on your own infrastructure. Distributed TensorFlow is powered by the high-performance
gRPC
library, which supports training on hundreds of machines in parallel. It complements our recent announcement of
Google Cloud Machine Learning
, which enables you to train and serve your TensorFlow models using the power of the Google Cloud Platform.
To coincide with the TensorFlow 0.8 release, we have published a
distributed trainer
for the
Inception image classification
neural network in the TensorFlow models repository. Using the distributed trainer, we trained the Inception network to 78% accuracy in less than 65 hours using 100 GPUs. Even small clusters—or a couple of machines under your desk—can benefit from distributed TensorFlow, since adding more GPUs improves the overall throughput, and produces accurate results sooner.
TensorFlow can speed up Inception training by a factor of 56, using 100 GPUs.
The distributed trainer also enables you to scale out training using a cluster management system like
Kubernetes
. Furthermore, once you have trained your model, you can deploy to production and
speed up inference using TensorFlow Serving on Kubernetes
.
Beyond distributed Inception, the 0.8 release includes
new libraries
for defining your own distributed models. TensorFlow's distributed architecture permits a great deal of flexibility in defining your model, because every process in the cluster can perform general-purpose computation. Our previous system
DistBelief
(like many systems that have followed it) used special "parameter servers" to manage the shared model parameters, where the parameter servers had a simple read/write interface for fetching and updating shared parameters. In TensorFlow, all computation—including parameter management—is represented in the dataflow graph, and the system maps the graph onto heterogeneous devices (like multi-core CPUs, general-purpose GPUs, and mobile processors) in the available processes. To make TensorFlow easier to use, we have included Python libraries that make it easy to write a model that runs on a single process and scales to use multiple replicas for training.
This architecture makes it easier to scale a single-process job up to use a cluster, and also to experiment with novel architectures for distributed training. As an example, my colleagues have recently shown that
synchronous SGD with backup workers
, implemented in the TensorFlow graph, achieves improved time-to-accuracy for image model training.
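The backup-worker idea can be sketched in a few lines: run a few more workers than you strictly need, aggregate the gradients that arrive first, and discard the stragglers so a slow machine cannot stall the synchronous step. The function name and the arrival-ordering assumption below are mine, not from the TensorFlow implementation:

```python
import numpy as np

def sync_sgd_with_backups(worker_grads, num_needed):
    """Aggregate the first `num_needed` gradients to arrive and discard
    the rest, so a few slow (straggler) workers cannot stall the step.
    `worker_grads` is assumed to be ordered by arrival time."""
    used = worker_grads[:num_needed]
    return np.mean(used, axis=0)
```

With, say, 5 workers and `num_needed=4`, each step waits only for the 4 fastest gradients, trading a small amount of gradient information for much better tail latency.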
The current version of distributed computing support in TensorFlow is just the start. We are continuing to research ways of improving the performance of distributed training—both through engineering and algorithmic improvements—and will share these improvements with the community
on GitHub
. However, getting to this point would not have been possible without help from the following people:
TensorFlow training libraries
- Jianmin Chen, Matthieu Devin, Sherry Moore and Sergio Guadarrama
TensorFlow core
- Zhifeng Chen, Manjunath Kudlur and Vijay Vasudevan
Testing
- Shanqing Cai
Inception model architecture
- Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Jonathon Shlens and Zbigniew Wojna
Project management
- Amy McDonald Sandjideh
Engineering leadership
- Jeff Dean and Rajat Monga
Machine Learning in the Cloud, with TensorFlow
Wednesday, March 23, 2016
Posted by Slaven Bilac, Software Engineer, Google Research
At Google, researchers collaborate closely with product teams, applying the latest advances in Machine Learning to existing products and services - such as
speech recognition in the Google app
,
search in Google Photos
and the
Smart Reply feature in Inbox by Gmail
- in order to make them more useful. A growing number of Google products are using
TensorFlow
, our open source Machine Learning system, to tackle ML challenges and we would like to enable others to do the same.
Today, at
GCP NEXT 2016
, we
announced the alpha release
of
Cloud Machine Learning
, a framework for building and training custom models to be used in intelligent applications.
Machine Learning projects can come in many sizes, and as we’ve seen with our open source offering
TensorFlow
, projects often need to scale up. Some small tasks are best handled with a local solution running on one’s desktop, while large scale applications require both the scale and dependability of a hosted solution. Google
Cloud Machine Learning
aims to support the full range and provide a seamless transition from local to cloud environment.
The
Cloud Machine Learning
offering allows users to run custom distributed learning algorithms based on
TensorFlow
. In addition to the
deep learning
capabilities that power
Cloud Translate API
,
Cloud Vision API
, and
Cloud Speech API
, we provide easy-to-adopt samples for common tasks like linear regression/classification with very fast convergence properties (based on the
SDCA
algorithm) and building a custom image classification model with a few hundred training examples (based on the
DeCAF
algorithm).
We are excited to bring the best of
Google Research
to
Google Cloud Platform
. Learn more about this release and more from GCP Next 2016 on the
Google Cloud Platform blog
.
Train your own image classifier with Inception in TensorFlow
Wednesday, March 09, 2016
Posted by Jon Shlens, Senior Research Scientist
At the end of last year we released code that allows a user
to classify images with TensorFlow
models. This code demonstrated how to build an image classification system by employing a deep learning model that we had previously trained. This model was known to classify an image across 1000 categories supplied by the
ImageNet
academic competition with an error rate that approached
human performance
. After all, what self-respecting computer vision system would fail to recognize a cute puppy?
Image via
Wikipedia
Well, thankfully the image classification model would recognize this image as a
retriever
with 79.3% confidence. But, more spectacularly, it would also be able to distinguish between a
spotted salamander
and
fire salamander
with high confidence – a task that might be quite difficult for those who are not experts in
herpetology
. Can
you
tell the difference?
Images via
Wikipedia
The deep learning model we released,
Inception-v3
, is described in our Arxiv preprint "
Rethinking the Inception Architecture for Computer Vision
” and can be visualized with this schematic diagram:
Schematic diagram of Inception-v3
As described in the preprint, this model achieves 5.64% top-5 error while an ensemble of four of these models achieves 3.58% top-5 error on the validation set of the ImageNet whole image
ILSVRC 2012
classification task. Furthermore, in the
2015 ImageNet Challenge
, an ensemble of 4 of these models came in 2nd in the image classification task.
After the release of this model, many people in the TensorFlow community voiced their preference for having an Inception-v3 model that they could train themselves, rather than using our pre-trained model. We could not agree more, since a system for training an Inception-v3 model provides many opportunities, including:
Exploration of different variants of this model architecture in order to improve the image classification system.
Comparison of optimization algorithms and hardware setups for training this model faster or to a higher degree of predictive performance.
Retraining/fine-tuning the Inception-v3 model on a distinct image classification task or as a component of a larger network tasked with object detection or multi-modal learning.
The last topic is often referred to as
transfer learning
, and has been an area of particular excitement in the field of deep networks in the context of vision. A common prescription for a computer vision problem is to first train an image classification model on the ImageNet Challenge data set, and then transfer this model’s knowledge to a distinct task. This has been done for
object detection
,
zero-shot learning
,
image captioning
,
video analysis
and multitudes of other applications.
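A toy illustration of this transfer-learning recipe, with everything synthetic (this is not the released code): a stand-in "pretrained" feature extractor is kept frozen, and only a new logistic-regression head is trained on its outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrained_features(images):
    # Stand-in for a frozen feature extractor such as Inception-v3: in
    # transfer learning its weights stay fixed; only the new head trains.
    return np.tanh(images @ rng.standard_normal((8, 4)))

# Synthetic "images" and binary labels for the new task.
X = rng.standard_normal((32, 8))
y = (X[:, 0] > 0).astype(float)

feats = pretrained_features(X)  # extracted once; never updated
w = np.zeros(4)                 # the only trainable parameters
for _ in range(200):
    p = 1 / (1 + np.exp(-feats @ w))
    w -= 0.5 * feats.T @ (p - y) / len(y)  # logistic-regression gradient step
```

Because only the small head is optimized, this kind of fine-tuning needs far less data and compute than training the full network from scratch.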
Today we are happy to announce that
we are releasing libraries and code for training
Inception-v3
on one or multiple GPUs. Some features of this code include:
Training an Inception-v3 model with synchronous updates across multiple GPUs.
Employing batch normalization to speed up training of the model.
Leveraging many distortions of the image to augment model training.
Releasing a new (still experimental) high-level language for specifying complex model architectures, which we call
TensorFlow-Slim
.
Demonstrating how to perform transfer learning by taking a pre-trained Inception-v3 model and fine-tuning it for another task.
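The image-distortion point in the list above can be sketched with two classic training-time augmentations, a random horizontal flip and a random crop. The crop sizes here are arbitrary toy values, not the ones used in the released trainer:

```python
import numpy as np

rng = np.random.default_rng(7)

def distort(image):
    """Simple training-time distortions: random horizontal flip followed
    by a random 24x24 crop from a 28x28 input."""
    if rng.random() < 0.5:
        image = image[:, ::-1]                  # horizontal flip
    top = rng.integers(0, 5)
    left = rng.integers(0, 5)
    return image[top:top + 24, left:left + 24]  # random crop

img = np.arange(28 * 28).reshape(28, 28)
crop = distort(img)
```

Each epoch the model sees a slightly different version of every image, which acts as a cheap regularizer and effectively enlarges the training set.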
We can train a model from scratch to its best performance on a desktop with 8 NVIDIA Tesla K40s in about 2 weeks. In order to make research progress faster, we are additionally supplying a new version of a pre-trained Inception-v3 model that is ready to be fine-tuned or adapted to a new task. We demonstrate how to use this model for transfer learning on a simple flower classification task. Hopefully, this provides a useful didactic example for employing this Inception model on a wide range of vision tasks.
Want to get started? See the accompanying
instructions
on how to
train
,
evaluate
or
fine-tune
a network.
Releasing this code has been a huge team effort. These efforts have taken several months with contributions from many individuals spanning research at Google. We wish to especially acknowledge the following people who contributed to this project:
Model Architecture
– Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Jon Shlens and Zbigniew Wojna
Systems Infrastructure
– Sherry Moore, Martin Wicke, David Andersen, Matthieu Devin, Manjunath Kudlur and Nishant Patil
TensorFlow-Slim
– Sergio Guadarrama and Nathan Silberman
Model Visualization
– Fernanda Viégas, Martin Wattenberg and James Wexler
Deep Learning for Robots: Learning from Large-Scale Interaction
Tuesday, March 08, 2016
Posted by Sergey Levine, Research Scientist
While we’ve recently seen great strides in robotic capability, the gap between human and robot motor skills remains vast. Machines still have a very long way to go to match human proficiency even at basic sensorimotor skills like grasping. However, by linking learning with continuous feedback and control, we might begin to bridge that gap, and in so doing make it possible for robots to intelligently and reliably handle the complexities of the real world.
Consider for example
this robot
from
KAIST
, which won last year’s
DARPA robotics challenge
. The remarkably precise and deliberate motions are deeply impressive. But they are also quite… robotic. Why is that? What makes robot behavior so distinctly robotic compared to human behavior? At a high level, current robots typically follow a sense-plan-act paradigm, where the robot observes the world around it, formulates an internal model, constructs a plan of action, and then executes this plan. This approach is modular and often effective, but tends to break down in the kinds of cluttered natural environments that are typical of the real world. Here, perception is imprecise, all models are wrong in some way, and no plan survives first contact with reality.
In contrast, humans and animals move quickly, reflexively, and often with remarkably little advance planning, by relying on highly developed and intelligent feedback mechanisms that use sensory cues to correct mistakes and compensate for perturbations. For example, when serving a tennis ball, the player continually observes the ball and the racket, adjusting the motion of his hand so that they meet in the air. This kind of feedback is fast, efficient, and, crucially, can correct for mistakes or unexpected perturbations. Can we train robots to reliably handle complex real-world situations by using similar feedback mechanisms to handle perturbations and correct mistakes?
While servoing and feedback control have been studied extensively in robotics, the question of how to define the right sensory cue remains exceptionally challenging, especially for rich modalities such as vision. So instead of choosing the cues by hand, we can program a robot to acquire them on its own from scratch, by learning from extensive experience in the real world. In our first experiments with real physical robots, we decided to tackle robotic grasping in clutter.
A human child is able to reliably grasp objects after one year, and takes around four years to acquire more sophisticated precision grasps. However, networked robots can instantaneously share their experience with one another, so if we dedicate 14 separate robots to the job of learning grasping in parallel, we can acquire the necessary experience much faster. Below is a video of our robots practicing grasping a range of common office and household objects:
While initially the grasps are executed at random and succeed only rarely, each day the latest experiences are used to train a deep
convolutional neural network
(CNN) to learn to predict the outcome of a grasp, given a camera image and a potential motor command. This CNN is then deployed on the robots the following day, in the inner loop of a servoing mechanism that continually adjusts the robot’s motion to maximize the predicted chance of a successful grasp. In essence, the robot is constantly predicting, by observing the motion of its own hand, which kind of subsequent motion will maximize its chances of success. The result is continuous feedback: what we might call hand-eye coordination. Observing the behavior of the robot after over 800,000 grasp attempts, which is equivalent to about 3000 robot-hours of practice, we can see the beginnings of intelligent reactive behaviors. The robot observes its own gripper and corrects its motions in real time. It also exhibits interesting pre-grasp behaviors, like isolating a single object from a group. All of these behaviors emerged naturally from learning, rather than being programmed into the system.
To evaluate whether the system achieves a measurable benefit from continuous feedback, we can compare its performance to an open-loop baseline that more closely resembles the sense-plan-act paradigm described previously, albeit with the open-loop grasps chosen by a CNN trained on the same data as the closed-loop servoing. This approach is most similar to
recent work by Pinto and Gupta
. With open-loop grasp selection, the robot chooses a single grasp pose from a single image, and then blindly executes this grasp. This method has a 34% average failure rate on the first 30 picking attempts for this set of office objects:
Incorporating continuous feedback into the system reduces the failures by nearly half, down to 18% from 34%, and produces interesting corrections and adjustments:
Neural networks have made great strides in allowing us to build computer programs that can process images, speech, text, and even draw pictures. However, introducing actions and control adds considerable new challenges, since every decision the network makes will affect what it sees next. Overcoming these challenges will bring us closer to building systems that understand the effects of their actions in the world. If we can bring the power of large-scale machine learning to robotic control, perhaps we will come one step closer to solving fundamental problems in robotics and automation.
The research on robotic hand-eye coordination and grasping was conducted by Sergey Levine, Peter Pastor, Alex Krizhevsky, and Deirdre Quillen, with special thanks to colleagues at Google Research and X who've contributed their expertise and time to this research. An early preprint is
available on arXiv
.
Exploring the Intersection of Art and Machine Intelligence
Monday, February 22, 2016
Posted by Mike Tyka, Software Engineer
In June of last year, we
published a story
about a visualization technique that helped to understand how neural networks carried out difficult visual classification tasks. In addition to helping us gain a deeper understanding of how NNs worked, these techniques also produced
strange, wonderful and oddly compelling images
.
Following that blog post, and especially after
we released the source code
, dubbed DeepDream, we
witnessed a tremendous interest
not only from the machine learning community but also from the creative coding community. Additionally, several artists such as
Amanda Peterson
(aka Gucky),
Memo Akten
,
Samim Winiger
,
Kyle McDonald
,
Gene Kogan
and many others immediately started experimenting with the technique as a new way to create art.
“
GCHQ
”, 2015, Memo Akten, used with permission.
Soon after, the paper
A Neural Algorithm of Artistic Style
by Leon Gatys and colleagues in Tübingen was released. Their technique used a convolutional neural network to factor images into separate style and content components. This in turn allowed the creation of new images that combined the style of one image with the content of another, using a neural network as a generic image parser. Once again it took the creative coding community by storm, and immediately many artists and coders began
experimenting
with the new algorithm, resulting in
Twitter bots
and other
explorations
and
experiments
.
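The style/content factorization at the heart of the Gatys technique rests on comparing feature statistics rather than raw pixels. As a rough sketch (an illustration of the idea, not the authors' actual code), the "style" carried by a layer's activations can be summarized by a Gram matrix of channel correlations, which deliberately discards spatial arrangement:

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature channels: a simple proxy for 'style'.

    features: array of shape (height, width, channels), e.g. one
    convolutional layer's activations for an image.
    """
    h, w, c = features.shape
    flat = features.reshape(h * w, c)   # spatial positions x channels
    return flat.T @ flat / (h * w)      # (channels x channels)

def style_loss(features_a, features_b):
    """Mean squared difference between the two images' Gram matrices."""
    diff = gram_matrix(features_a) - gram_matrix(features_b)
    return float(np.mean(diff ** 2))

# Shuffling spatial positions leaves the Gram matrix unchanged, which is
# why it captures 'style' (textures, colors) rather than 'content' (layout).
rng = np.random.default_rng(0)
acts = rng.standard_normal((8, 8, 4))
shuffled = acts.reshape(64, 4)[rng.permutation(64)].reshape(8, 8, 4)
assert np.allclose(gram_matrix(acts), gram_matrix(shuffled))
```

The full method optimizes a new image to match one image's Gram matrices (style) and another's raw activations (content) simultaneously.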
The
style transfer algorithm
crosses a photo with a painting style; for example,
Neil deGrasse Tyson
in the style of
Kandinsky’s Jaune Rouge Bleu
. Photo by
Guillaume Piolle
, used with permission.
The open-source deep-learning community, especially projects such as
GitXiv
, hugely contributed to the spread, accessibility and development of these algorithms. Both DeepDream and style transfer were rapidly implemented in a plethora of different languages and deep learning packages. Immediately others took the techniques and developed them
further
.
“Saxophone dreams” - Mike Tyka.
With machine learning as a field moving forward at a breakneck pace and rapidly becoming part of many -- if not most -- online products, the opportunities for artistic uses are as wide as they are unexplored and perhaps overlooked. However, the interest is growing rapidly: the University of London is now offering a course on
Machine learning and art
. NYU ITP offers
a similar program
this year. The Tate Modern’s IK Prize 2016 topic:
Artificial Intelligence
.
These are exciting early days, and we want to continue to stimulate artistic interest in these emerging technologies. To that end, we are announcing a two day DeepDream event in San Francisco at the
Gray Area Foundation for the Arts
, aimed at showcasing some of the latest explorations at the intersection of Machine Intelligence and Art, and at spurring discussion about future directions:
Friday Feb 26th:
DeepDream: The Art of Neural Networks
, an exhibit consisting of 29 neural network generated artworks, created by artists at Google and from around the world. The works will be auctioned, with all proceeds going to the Gray Area Foundation, which has been active in supporting the intersection between arts and technology for over 10 years.
On Saturday Feb 27th
:
Art and Machine Learning Symposium
, an open one-day symposium on Machine Learning and Art, aiming to bring together the neural network and the creative coding communities to exchange ideas, learn and discuss. Videos of all the talks will be posted online after the event.
We look forward to sharing some of the interesting works of art generated by the art and machine learning community, and being part of the discussion of how art and technology can be combined.
Running your models in production with TensorFlow Serving
Tuesday, February 16, 2016
Posted by Noah Fiedel, Software Engineer
Machine learning powers many Google product features, from
speech recognition in the Google app
to
Smart Reply in Inbox
to
search in Google Photos
. While decades of experience have enabled the software industry to establish best practices for building and supporting products, doing so for services based upon machine learning introduces
new and interesting challenges
.
Today, we announce the release of
TensorFlow Serving
, designed to address some of these challenges. TensorFlow Serving is a high performance, open source serving system for machine learning models, designed for production environments and optimized for
TensorFlow
.
TensorFlow Serving
is ideal for running, at large scale, multiple models that change over time based on real-world data, enabling:
model lifecycle management
experiments with multiple algorithms
efficient use of GPU resources
TensorFlow Serving makes the process of taking a model into production easier and faster. It allows you to safely deploy new models and
run experiments
while keeping the same server architecture and APIs. Out of the box it provides integration with TensorFlow, but it can be extended to serve other types of models.
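The "same server architecture and APIs" idea can be illustrated with a deliberately tiny, pure-Python sketch: clients keep calling one stable prediction API while new model versions are validated and swapped in behind it. All names here are hypothetical; TensorFlow Serving's actual C++ core also handles loading, unloading, batching, and resource management.

```python
import threading

class ModelManager:
    """Toy illustration of hot-swapping model versions behind a fixed API."""

    def __init__(self):
        self._lock = threading.Lock()
        self._versions = {}   # version number -> model (any callable)
        self._current = None

    def deploy(self, version, model):
        """Validate, then atomically promote, a new model version."""
        if not callable(model):
            raise TypeError("model must be callable")
        with self._lock:
            self._versions[version] = model
            self._current = version

    def predict(self, x):
        """Clients call the same API regardless of which version is live."""
        with self._lock:
            model = self._versions[self._current]
        return model(x)

manager = ModelManager()
manager.deploy(1, lambda x: x * 2)   # first trained model
print(manager.predict(10))           # -> 20
manager.deploy(2, lambda x: x * 3)   # new version trained on fresh data
print(manager.predict(10))           # -> 30, same API, new model
```

The lock makes the version swap atomic with respect to in-flight predictions, which is the essence of deploying safely without changing the serving interface.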
Here’s how it works. In the simplified, supervised training pipeline shown below, training data is fed to the learner, which outputs a model:
Once a new model version becomes available, upon
validation
, it is ready to be deployed to the serving system, as shown below.
TensorFlow Serving uses the (previously trained) model to perform inference - predictions based on new data presented by its clients. Since clients typically communicate with the serving system using a
remote procedure call
(RPC) interface, TensorFlow Serving comes with a reference front-end implementation based on
gRPC
, a high performance, open source RPC framework from Google.
It is quite common to launch and iterate on your model over time, as new data becomes available, or as you improve the model. In fact, at Google, many pipelines run continuously, producing new model versions as new data becomes available.
TensorFlow Serving is written in C++ and supports Linux, and it introduces minimal overhead. In our benchmarks we recorded ~100,000 queries per second (QPS) per core on a 16 vCPU Intel Xeon E5 2.6 GHz
machine
, excluding gRPC and the TensorFlow inference processing time.
We are excited to share this important component of TensorFlow today under the Apache 2.0 open source license. We would love to hear your
questions
and
feature requests
on Stack Overflow and GitHub respectively. To get started quickly, clone the code from
github.com/tensorflow/serving
and check out this
tutorial
.
You can expect to keep hearing more about TensorFlow as we continue to develop what we believe to be one of the best machine learning toolboxes in the world. If you'd like to stay up to date, follow
@googleresearch
or
+ResearchatGoogle
, and keep an eye out for
Jeff Dean
's keynote address at
GCP Next 2016
in March.
Teach Yourself Deep Learning with TensorFlow and Udacity
Thursday, January 21, 2016
Posted by Vincent Vanhoucke, Principal Research Scientist
Deep learning
has become one of the hottest topics in machine learning in recent years. With
TensorFlow
, the deep learning platform that we
recently released
as an open-source project, our goal was to bring the capabilities of deep learning to everyone. So far, we are extremely excited by the uptake: more than 4000 users have forked it on
GitHub
in just a few weeks, and the project has been starred more than 16000 times by
enthusiasts around the globe
.
To help make deep learning even more accessible to engineers and data scientists at large, we are launching a new
Deep Learning Course
developed in collaboration with
Udacity
. This short, intensive course provides you with all the basic tools and vocabulary to get started with deep learning, and walks you through how to use it to address some of the most common machine learning problems. It is also accompanied by interactive TensorFlow notebooks that directly mirror and implement the concepts introduced in the lectures.
The course consists of four lectures which provide a tour of the main building blocks used to solve problems ranging from image recognition to text analysis. The first lecture focuses on the basics that will be familiar to those already versed in machine learning: setting up your data and experimental protocol, and training simple classification models. The second lecture builds on these fundamentals to explore how these simple models can be made deeper and more powerful, and examines the scalability problems that come with that, in particular regularization and hyperparameter tuning. The third lecture is all about
convolutional networks
and image recognition. The fourth and final lecture explores models for text and sequences in general, with embeddings and
recurrent neural networks
. By the end of the course, you will have implemented and trained this variety of models on your own machine and will be ready to transfer that knowledge to solve your own problems!
Our overall goal in designing this course was to provide the machine learning enthusiast with a rapid and direct path to solving real and interesting problems with deep learning techniques, and we're now very excited to share what we've built! It has been a lot of fun putting it together with the fantastic team of experts in online course design and production at Udacity. For more details, see the
Udacity blog post
, and
register for the course
. We hope you enjoy it!
How to Classify Images with TensorFlow
Monday, December 07, 2015
Posted by Pete Warden, Software Engineer
Prior to joining Google, I spent a lot of time trying to get computers to recognize objects in images. At
Jetpac
my colleagues and I built mustache detectors to recognize bars full of hipsters, blue sky detectors to find pubs with beer gardens, and dog detectors to spot canine-friendly cafes. At first, we used the traditional computer vision approaches that I'd used my whole career, writing a big ball of custom logic to laboriously recognize one object at a time. For example, to spot sky I'd first run a color detection filter over the whole image looking for shades of blue, and then look at the upper third. If it was mostly blue, and the lower portion of the image wasn't, then I'd classify that as probably a photo of the outdoors.
I'd been an engineer working on vision problems since the late 90's, and the sad truth was that unless you had a research team and plenty of time behind you, this sort of hand-tailored hack was the only way to get usable results. As you can imagine, the results were far from perfect and each detector I wrote was a custom job, and didn't help me with the next thing I needed to recognize. This probably seems laughable to anybody who didn't work in computer vision in the recent past! It's such a primitive way of solving the problem, it sounds like it should have been superseded long ago.
That's why I was so excited when I started to play around with
deep learning
. It became clear as I tried them out that the latest approaches using convolutional neural networks were producing far better results than my hand-tuned code on similar problems. Not only that, the process of training a detector for a new class of object was much easier. I didn't have to think about what features to detect, I'd just supply a network with new training examples and it would take it from there.
Those experiences converted me into a deep learning enthusiast, and so when Jetpac was acquired and I had the chance to join Google and work with many of the stars of the field, I couldn't resist. What impressed me more than anything was the team's willingness to share their knowledge with the rest of the world.
I'm especially happy that we've just managed to release
TensorFlow
, our internal machine learning framework, because it gives me a chance to show practical, usable examples of why I'm so convinced deep learning is an essential tool for anybody working with images, speech, or text in ML.
Given my background, my favorite first example is using a deep network to spot objects in an image. One of the early showcases for the new approach to neural networks was
an annual competition
to recognize 1,000 different classes of objects, from the
Imagenet
data set, and TensorFlow includes a pre-trained network for that task. If you look inside the examples folder in the
source code
, you'll see “
label_image
”, which is a small C++ application for using that network.
The
README
has the instructions for building TensorFlow on your machine, downloading the binary files defining the network, and compiling the sample code. Once it's all built, just run it with no arguments, and you should see a list of results showing "Military Uniform" at the top. This is running on the default image of Admiral Grace Hopper, and correctly spots her attire.
Image via
Wikipedia
After that, try pointing it at your own images using the “--image” command line flag, and you should see a set of labels for each. If you want to know more about what's going on under the hood, the C++ section of
the TensorFlow Inception tutorial
goes into a lot more detail.
The only things it will spot are those that are in the original 1,000 Imagenet classes, and it will always try to find something, which can lead to some funny results. There are no people categories, so on portraits you'll often see objects that are associated with people like seat belts or oxygen masks, or in Lincoln’s case, a bow tie!
Image via
U.S History Images
If the image is poorly lit, then “
nematode
” is usually the top pick since most training photos of those are taken in very dim surroundings. It's also not perfect in its identification: it fails to place the right label among its top five results 5.6% of the time. However, that’s not all that bad considering Stanford’s Andrej Karpathy
found that even someone who was trained at the job could only achieve a slightly-better 5.1% error doing the same task manually
. We can do even better if we combine the outputs of four trained models into an "ensemble", with an error rate of just 3.5%.
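To make the ensembling step concrete, here is a small illustrative sketch of the idea: average the per-class probabilities from several models and read off the top five labels. The random probabilities below are stand-ins for real model outputs, and the numbers are illustrative only.

```python
import numpy as np

def top_k(probs, k=5):
    """Indices of the k highest-probability classes, best first."""
    return list(np.argsort(probs)[::-1][:k])

def ensemble(prob_list):
    """Average the per-class probability vectors of several models."""
    return np.mean(prob_list, axis=0)

rng = np.random.default_rng(42)
n_classes = 1000
# Four models' softmax outputs for one image (random stand-ins here).
models = [rng.dirichlet(np.ones(n_classes)) for _ in range(4)]
avg = ensemble(models)
print(top_k(avg))                    # top-5 labels under the averaged prediction
assert np.isclose(avg.sum(), 1.0)    # averaging keeps a valid distribution
```

Averaging tends to cancel out each model's individual mistakes, which is why the four-model ensemble's top-5 error drops below any single model's.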
It's unlikely that the set of labels it produces is exactly what you need for your application, so the next step would be to train your own network. That is a much bigger task than running a pre-trained one like this, but one of the things I like about TensorFlow is that it spans the whole lifecycle of a machine learning model, from experimentation, to training, and into production, as this example shows. To get started training, I'd recommend looking at
this simple tutorial on recognizing hand-drawn digits from the MNIST data set
.
I hope that sharing this framework will help developers build amazing user experiences we’d never even think of. We’ve been having a massive amount of fun with TensorFlow, and I can’t wait to see what interesting image tools you build using it!
NIPS 2015 and Machine Learning Research at Google
Sunday, December 06, 2015
Posted by Sanjiv Kumar, Research Scientist
This week, Montreal hosts the
29th Annual Conference on Neural Information Processing Systems
(NIPS 2015), a machine learning and computational neuroscience conference that includes invited talks, demonstrations and oral and poster presentations of some of the latest in machine learning research. Google will have a strong presence at NIPS 2015, with over 140 Googlers attending in order to contribute to and learn from the broader academic research community by presenting technical talks and posters, in addition to hosting workshops and tutorials.
Research at Google is at the forefront of innovation in Machine Intelligence, actively exploring virtually all aspects of
machine learning
including classical algorithms as well as cutting-edge techniques such as
deep learning
. Focusing on both theory as well as application, much of our work on language understanding, speech, translation, visual processing, ranking, and prediction relies on Machine Intelligence. In all of those tasks and many others, we gather large volumes of direct or indirect evidence of relationships of interest, and develop learning approaches to understand and generalize.
If you are attending NIPS 2015, we hope you’ll stop by our booth and chat with our researchers about the projects and opportunities at Google that go into solving interesting problems for billions of people. You can also learn more about our research being presented at NIPS 2015 in the list below (Googlers highlighted in
blue
).
Google is a Platinum Sponsor of NIPS 2015.
PROGRAM ORGANIZERS
General Chairs
Corinna Cortes
, Neil D. Lawrence
Program Committee includes:
Samy Bengio
,
Gal Chechik
,
Ian Goodfellow
,
Shakir Mohamed
,
Ilya Sutskever
ORAL SESSIONS
Learning Theory and Algorithms for Forecasting Non-stationary Time Series
Vitaly Kuznetsov,
Mehryar Mohri
SPOTLIGHT SESSIONS
Distributed Submodular Cover: Succinctly Summarizing Massive Data
Baharan Mirzasoleiman, Amin Karbasi,
Ashwinkumar Badanidiyuru
, Andreas Krause
Spatial Transformer Networks
Max Jaderberg
,
Karen Simonyan
,
Andrew Zisserman
,
Koray Kavukcuoglu
Pointer Networks
Oriol Vinyals,
Meire Fortunato,
Navdeep Jaitly
Structured Transforms for Small-Footprint Deep Learning
Vikas Sindhwani
,
Tara Sainath
,
Sanjiv Kumar
Spherical Random Features for Polynomial Kernels
Jeffrey Pennington
,
Felix Yu
,
Sanjiv Kumar
POSTERS
Learning to Transduce with Unbounded Memory
Edward Grefenstette
,
Karl Moritz Hermann
,
Mustafa Suleyman,
Phil Blunsom
Deep Knowledge Tracing
Chris Piech, Jonathan Bassen,
Jonathan Huang,
Surya Ganguli, Mehran Sahami, Leonidas Guibas, Jascha Sohl-Dickstein
Hidden Technical Debt in Machine Learning Systems
D Sculley,
Gary Holt
,
Daniel Golovin
,
Eugene Davydov
,
Todd Phillips
,
Dietmar Ebner
,
Vinay Chaudhary
,
Michael Young
,
Jean-Francois Crespo
,
Dan Dennison
Grammar as a Foreign Language
Oriol Vinyals
,
Lukasz Kaiser
,
Terry Koo
,
Slav Petrov
,
Ilya Sutskever
,
Geoffrey Hinton
Stochastic Variational Information Maximisation
Shakir Mohamed
,
Danilo Rezende
Embedding Inference for Structured Multilabel Prediction
Farzaneh Mirzazadeh, Siamak Ravanbakhsh, Bing Xu,
Nan Ding
, Dale Schuurmans
On the Convergence of Stochastic Gradient MCMC Algorithms with High-Order Integrators
Changyou Chen,
Nan Ding
, Lawrence Carin
Spectral Norm Regularization of Orthonormal Representations for Graph Transduction
Rakesh Shivanna
, Bibaswan Chatterjee, Raman Sankaran, Chiranjib Bhattacharyya, Francis Bach
Differentially Private Learning of Structured Discrete Distributions
Ilias Diakonikolas,
Moritz Hardt
, Ludwig Schmidt
Nearly Optimal Private LASSO
Kunal Talwar
,
Li Zhang
, Abhradeep Thakurta
Learning Continuous Control Policies by Stochastic Value Gradients
Nicolas Heess
,
Greg Wayne
,
David Silver
,
Timothy Lillicrap
,
Tom Erez
,
Yuval Tassa
Gradient Estimation Using Stochastic Computation Graphs
John Schulman
,
Nicolas Heess
,
Theophane Weber
, Pieter Abbeel
Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks
Samy Bengio
,
Oriol Vinyals
,
Navdeep Jaitly
,
Noam Shazeer
Teaching Machines to Read and Comprehend
Karl Moritz Hermann
,
Tomas Kocisky
,
Edward Grefenstette
,
Lasse Espeholt
,
Will Kay
,
Mustafa Suleyman
,
Phil Blunsom
Bayesian dark knowledge
Anoop Korattikara
,
Vivek Rathod
,
Kevin Murphy
, Max Welling
Generalization in Adaptive Data Analysis and Holdout Reuse
Cynthia Dwork, Vitaly Feldman,
Moritz Hardt
, Toniann Pitassi, Omer Reingold, Aaron Roth
Semi-supervised Sequence Learning
Andrew Dai
,
Quoc Le
Natural Neural Networks
Guillaume Desjardins
,
Karen Simonyan
,
Razvan Pascanu
,
Koray Kavukcuoglu
Revenue Optimization against Strategic Buyers
Andres Munoz Medina
,
Mehryar Mohri
WORKSHOPS
Feature Extraction: Modern Questions and Challenges
Workshop Chairs include:
Dmitry Storcheus
,
Afshin Rostamizadeh,
Sanjiv Kumar
Program Committee includes:
Jeffery Pennington
,
Vikas Sindhwani
NIPS Time Series Workshop
Invited Speakers include:
Mehryar Mohri
Panelists include:
Corinna Cortes
Nonparametric Methods for Large Scale Representation Learning
Invited Speakers include:
Amr Ahmed
Machine Learning for Spoken Language Understanding and Interaction
Invited Speakers include:
Larry Heck
Adaptive Data Analysis
Organizers include:
Moritz Hardt
Deep Reinforcement Learning
Organizers include:
David Silver
Invited Speakers include:
Sergey Levine
Advances in Approximate Bayesian Inference
Organizers include:
Shakir Mohamed
Panelists include:
Danilo Rezende
Cognitive Computation: Integrating Neural and Symbolic Approaches
Invited Speakers include:
Ramanathan V. Guha
,
Geoffrey Hinton
,
Greg Wayne
Transfer and Multi-Task Learning: Trends and New Perspectives
Invited Speakers include:
Mehryar Mohri
Poster presentations include:
Andres Munoz Medina
Learning and privacy with incomplete data and weak supervision
Organizers include:
Felix Yu
Program Committee includes:
Alexander Blocker
,
Krzysztof Choromanski
,
Sanjiv Kumar
Speakers include:
Nando de Freitas
Black Box Learning and Inference
Organizers include:
Ali Eslami
Keynotes include:
Geoff Hinton
Quantum Machine Learning
Invited Speakers include:
Hartmut Neven
Bayesian Nonparametrics: The Next Generation
Invited Speakers include:
Amr Ahmed
Bayesian Optimization: Scalability and Flexibility
Organizers include:
Nando de Freitas
Reasoning, Attention, Memory (RAM)
Invited speakers include:
Alex Graves
,
Ilya Sutskever
Extreme Classification 2015: Multi-class and Multi-label Learning in Extremely Large Label Spaces
Panelists include:
Mehryar Mohri
,
Samy Bengio
Invited speakers include:
Samy Bengio
Machine Learning Systems
Invited speakers include:
Jeff Dean
SYMPOSIA
Brains, Mind and Machines
Invited Speakers include:
Geoffrey Hinton
,
Demis Hassabis
Deep Learning Symposium
Program Committee Members include:
Samy Bengio
,
Phil Blunsom
,
Nando De Freitas
,
Ilya Sutskever
,
Andrew Zisserman
Invited Speakers include:
Max Jaderberg
,
Sergey Ioffe
,
Alexander Graves
Algorithms Among Us: The Societal Impacts of Machine Learning
Panelists include:
Shane Legg
TUTORIALS
NIPS 2015 Deep Learning Tutorial
Geoffrey E. Hinton
,
Yoshua Bengio
,
Yann LeCun
Large-Scale Distributed Systems for Training Neural Networks
Jeff Dean
,
Oriol Vinyals
TensorFlow - Google’s latest machine learning system, open sourced for everyone
Monday, November 09, 2015
Posted by Jeff Dean, Senior Google Fellow, and Rajat Monga, Technical Lead
Deep Learning has had a huge impact on computer science, making it possible to explore new frontiers of research and to develop amazingly useful products that millions of people use every day. Our internal deep learning infrastructure
DistBelief
, developed in 2011, has allowed Googlers to build ever larger
neural networks
and scale training to thousands of cores in our datacenters. We’ve used it to demonstrate that
concepts like “cat”
can be learned from unlabeled YouTube images, to improve speech recognition in
the Google app
by 25%, and to build image search
in Google Photos
. DistBelief also trained the Inception model that won Imagenet’s
Large Scale Visual Recognition Challenge in 2014
, and drove our experiments in
automated image captioning
as well as
DeepDream
.
While DistBelief was very successful, it had some limitations. It was narrowly targeted to neural networks, it was difficult to configure, and it was tightly coupled to Google’s internal infrastructure - making it nearly impossible to share research code externally.
Today we’re proud to announce the open source release of
TensorFlow
-- our second-generation machine learning system, specifically designed to correct these shortcomings. TensorFlow is general, flexible, portable, easy-to-use, and completely open source. We added all this while improving upon DistBelief’s speed, scalability, and production readiness -- in fact, on some benchmarks, TensorFlow is twice as fast as DistBelief (see the
whitepaper
for details of TensorFlow’s programming model and implementation).
TensorFlow has extensive built-in support for deep learning, but is far more general than that -- any computation that you can express as a computational flow graph, you can compute with TensorFlow (see some
examples
). Any gradient-based machine learning algorithm will benefit from TensorFlow’s
auto-differentiation
and suite of first-rate optimizers. And it’s easy to express your new ideas in TensorFlow via the flexible Python interface.
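The flow-graph-plus-auto-differentiation idea can be sketched in a few lines of plain Python. This toy reverse-mode differentiator illustrates the concept only; it is not TensorFlow's API, and the class and method names are invented for the example.

```python
class Node:
    """A value in a tiny computation graph, with reverse-mode gradients."""

    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # list of (parent_node, local_gradient)
        self.grad = 0.0

    def __add__(self, other):
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

    def backward(self):
        """Accumulate d(output)/d(node) from the output back to the inputs."""
        # Topologically order the graph so each node's gradient is final
        # before it is propagated to its parents.
        order, seen = [], set()
        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.parents:
                    visit(parent)
                order.append(node)
        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            for parent, local in node.parents:
                parent.grad += local * node.grad

x = Node(3.0)
y = Node(4.0)
z = x * y + x            # builds the graph for z = x*y + x
z.backward()
print(z.value, x.grad, y.grad)   # 15.0, dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```

Because every operation records its local gradients as the graph is built, one backward pass yields gradients for all inputs, which is what lets a framework hand any flow-graph computation to a gradient-based optimizer automatically.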
Inspecting a model with TensorBoard, the visualization tool
TensorFlow is great for research, but it’s ready for use in real products too. TensorFlow was built from the ground up to be fast, portable, and ready for production service. You can move your idea seamlessly from training on your desktop GPU to running on your mobile phone. And you can get started quickly with powerful machine learning tech by using our state-of-the-art
example model architectures
. For example, we plan to release our complete, top shelf ImageNet computer vision model on TensorFlow soon.
But the most important thing about TensorFlow is that it’s yours. We’ve open-sourced TensorFlow as a standalone library and associated tools, tutorials, and examples with the Apache 2.0 license so you’re free to use TensorFlow at your institution (no matter where you work).
Our deep learning researchers all use TensorFlow in their experiments. Our engineers use it to infuse Google Search with
signals derived from deep neural networks
, and to power the
magic features of tomorrow
. We’ll continue to use TensorFlow to serve machine learning in products, and our research team is committed to sharing TensorFlow implementations of our published ideas. We hope you’ll join us at
www.tensorflow.org
.