By Lukas Masuch - 18,289 followers - 112 posts - Visible to everyone

Machine Learning, Deep Learning, Neuroscience, NLP & Artificial Intelligence (AI)

Stream

Lukas Masuch

Shared publicly
 
Deep Learning - The Past, Present and Future of Artificial Intelligence

In the last couple of years, deep learning techniques have transformed the world of artificial intelligence. One by one, the abilities and techniques that humans once imagined were uniquely our own have begun to fall to the onslaught of ever more powerful machines. Deep neural networks are now better than humans at tasks such as face recognition and object recognition. They’ve mastered the ancient game of Go and thrashed the best human players. “The pace of progress in artificial general intelligence is incredibly fast” (Elon Musk, CEO of Tesla & SpaceX), leading to an AI that “would be either the best or the worst thing ever to happen to humanity” (Stephen Hawking, physicist).

What sparked this new hype? How is Deep Learning different from previous approaches? Let’s look behind the curtain and unravel the reality. This talk will introduce the core concept of deep learning, explore why Sundar Pichai (CEO Google) recently announced that “machine learning is a core transformative way by which Google is rethinking everything they are doing” and explain why “deep learning is probably one of the most exciting things that is happening in the computer industry“ (Jen-Hsun Huang – CEO NVIDIA).

#ArtificialIntelligence #DeepLearning #MachineLearning #Future
It was roughly 30 years ago that AI was not only a topic for science-fiction writers but also a major research field surrounded by huge hopes and inve…

Lukas Masuch

Shared publicly
 
How to visualize high-dimensional data effectively with t-SNE

A popular method for exploring high-dimensional data is t-SNE, introduced by van der Maaten and Hinton in 2008. The technique has become widespread in the field of machine learning, since it has an almost magical ability to create compelling two-dimensional “maps” from data with hundreds or even thousands of dimensions. The goal is to take a set of points in a high-dimensional space and find a faithful representation of those points in a lower-dimensional space, typically the 2D plane. The algorithm is non-linear and adapts to the underlying data, performing different transformations on different regions. Although impressive, these images can be tempting to misread, and those region-dependent transformations can be a major source of confusion. The purpose of this article is to prevent some common misreadings.
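
For readers who want to try this themselves, here is a minimal sketch using scikit-learn's `TSNE` on a small subset of the digits dataset. The dataset choice and hyperparameters are illustrative assumptions, not settings from the article:

```python
# Map 64-dimensional digit images down to a 2-D "map" with t-SNE.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)
X, y = X[:200], y[:200]  # small subset so the demo runs quickly

tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=0)
embedding = tsne.fit_transform(X)
print(embedding.shape)  # (200, 2)
```

Note that, as the article warns, the result depends heavily on `perplexity` and the random seed, so a single plot should not be over-interpreted.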

#TSNE #MachineLearning #Google
Although extremely useful for visualizing high-dimensional data, t-SNE plots can sometimes be mysterious or misleading.

Lukas Masuch

Shared publicly
 
DeepMind introduces the Differentiable Neural Computer - a hybrid learning machine combining neural networks with read-write memory

In a recent Nature paper, Google DeepMind showed how neural networks and memory systems can be combined to make learning machines that can store knowledge quickly and reason about it flexibly. These models, which DeepMind calls differentiable neural computers (DNCs), can learn from examples like neural networks, but they can also store complex data like computers. Differentiable neural computers learn how to use memory and how to produce answers completely from scratch. They learn to do so using the magic of optimisation: when a DNC produces an answer, that answer is compared to a desired correct answer. Over time, the controller learns to produce answers that are closer and closer to the correct answer. In the process, it figures out how to use its memory.

DeepMind wanted machines that could learn to form and navigate complex data structures on their own. At the heart of a DNC is a neural network called a controller, which is analogous to the processor in a computer. A controller is responsible for taking input in, reading from and writing to memory, and producing output that can be interpreted as an answer. The memory is a set of locations that can each store a vector of information.

A controller can perform several operations on memory. At every tick of a clock, it chooses whether to write to memory or not. If it chooses to write, it can choose to store information at a new, unused location or at a location that already contains information the controller is searching for. This allows the controller to update what is stored at a location. If all the locations in memory are used up, the controller can decide to free locations, much like how a computer can reallocate memory that is no longer needed. When the controller does write, it sends a vector of information to the chosen location in memory. Every time information is written, the locations are connected by links of association, which represent the order in which information was stored.

As well as writing, the controller can read from multiple locations in memory. Memory can be searched based on the content of each location, or the associative temporal links can be followed forward and backward to recall information written in sequence or in reverse. The read-out information can be used to produce answers to questions or actions to take in an environment. Together, these operations give DNCs the ability to make choices about how they allocate memory, store information in memory, and easily find it once there.
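
The content-based read described above can be sketched in a few lines: the controller emits a "key" vector, each memory row is scored by cosine similarity to the key, and the read-out is an attention-weighted average of memory contents. The memory size, key, and softmax sharpness below are illustrative stand-ins, not DeepMind's implementation:

```python
# Toy content-based addressing: read from memory by similarity to a key.
import numpy as np

def content_read(memory, key, sharpness=10.0):
    """Softmax-weighted read over memory rows, scored by cosine similarity."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key)
    cosine = memory @ key / np.maximum(norms, 1e-8)
    weights = np.exp(sharpness * cosine)
    weights /= weights.sum()          # attention over memory locations
    return weights @ memory, weights  # read vector, read weights

memory = np.array([[1.0, 0.0],        # three stored vectors
                   [0.0, 1.0],
                   [0.7, 0.7]])
read, w = content_read(memory, key=np.array([1.0, 0.1]))
```

Because every step (similarity, softmax, weighted sum) is differentiable, gradients can flow through the read operation, which is what lets the whole machine be trained end to end by optimisation.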

This learning machine is able, without prior programming, to organise information into connected facts and use those facts to solve problems.

Paper: http://rdcu.be/kXhV

#DeepMind #Google #Nature #DeepLearning #DNC
In a study in Nature, we introduce a form of memory-augmented neural network called a differentiable neural computer, and show that it can learn to use its memory to answer questions about complex, structured data, including artificially generated stories, family trees, and even a map of the London Underground. We also show that it can solve a block puzzle game using reinforcement learning.

Lukas Masuch

Shared publicly
 
Image Compression with Neural Networks

In this project, Google researchers expand on previous research on data compression using neural networks, exploring whether machine learning can provide better results for image compression. They introduce an architecture that uses a new variant of the Gated Recurrent Unit (a type of RNN that allows units to save activations and process sequences) called Residual Gated Recurrent Unit (Residual GRU). A Residual GRU combines existing GRUs with the residual connections introduced in "Deep Residual Learning for Image Recognition" to achieve significant image quality gains for a given compression rate. Instead of using a DCT to generate a new bit representation like many compression schemes in use today, two sets of neural networks are trained - one to create the codes from the image (encoder) and another to create the image from the codes (decoder). This is the first neural network architecture that is able to outperform JPEG at image compression across most bitrates on the rate-distortion curve on the Kodak dataset images, with and without the aid of entropy coding.

Paper: https://arxiv.org/abs/1608.05148

#Google #DeepLearning #MachineLearning #ImageCompression

Lukas Masuch

Shared publicly
 
Neural Photo Editor - A simple interface for editing natural photos with generative neural networks

This paper presents the Neural Photo Editor, an interface for exploring the latent space of generative image models and making large, semantically coherent changes to existing images. The interface is powered by the Introspective Adversarial Network, a hybridization of the Generative Adversarial Network and the Variational Autoencoder designed for use in the editor. This model makes use of a novel computational block based on dilated convolutions, and Orthogonal Regularization, a novel weight regularization method. The model was validated on CelebA, SVHN, and ImageNet, and produces samples and reconstructions with high visual fidelity.
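
The editor's core operation, schematically, is: encode an image into the model's latent space, nudge the latent code along some direction, and decode the result. The linear "model" below is a deliberately trivial placeholder for the paper's Introspective Adversarial Network, chosen only so the round trip is exact and the mechanics are visible:

```python
# Latent-space editing, with an orthogonal linear map standing in for
# the trained encoder/decoder pair.
import numpy as np

rng = np.random.RandomState(0)
W, _ = np.linalg.qr(rng.randn(64, 64))  # orthogonal => lossless round trip

def encode(x):
    return W @ x

def decode(z):
    return W.T @ z

image = rng.randn(64)
z = encode(image)
direction = np.eye(64)[0]               # some latent direction to move along
edited = decode(z + 0.5 * direction)    # a semantically coherent change, ideally
```

In the real editor the latent direction is derived from the user's brush strokes, and the decoder is a trained generative network rather than a linear map.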

Paper: http://arxiv.org/abs/1609.07093
Sourcecode: https://github.com/ajbrock/Neural-Photo-Editor

#DeepLearning #MachineLearning #Autoencoder

Lukas Masuch

Shared publicly
 
Google's machine learning model is better than the median board-certified ophthalmologist in assessing signs of diabetic retinopathy

The Google Brain team has been focusing some of its efforts on how machine learning can transform healthcare. Diabetic retinopathy is the fastest-growing cause of blindness, with nearly 415 million diabetic patients at risk worldwide. This deep learning algorithm is capable of interpreting signs of DR in retinal photographs, potentially helping doctors screen more patients in settings with limited resources. Automated diabetic retinopathy screening methods with high accuracy have strong potential to assist doctors in evaluating more patients and quickly routing those who need help to a specialist.

This is an example of the transformative potential of machine learning for healthcare, because in many parts of the world, there simply aren't enough ophthalmologists to screen everyone for DR (the actual cameras to take the retinal images are not that expensive and so the real bottleneck is the time of skilled ophthalmologists to interpret the images).

Paper: http://jamanetwork.com/journals/jama/fullarticle/2588763

#DeepLearning #ComputerVision #Healthcare #Google

Lukas Masuch

Shared publicly
 
Historic Achievement: Microsoft researchers reach human parity in conversational speech recognition

Microsoft has made a major breakthrough in speech recognition, creating a technology that recognizes the words in a conversation as well as a person does. A team of researchers and engineers in Microsoft Artificial Intelligence and Research reported a speech recognition system that makes the same or fewer errors than professional transcriptionists. The researchers reported a word error rate (WER) of 5.9 percent, down from the 6.3 percent WER the team reported just last month. The 5.9 percent error rate is about equal to that of people who were asked to transcribe the same conversation, and it’s the lowest ever recorded against the industry standard Switchboard speech recognition task. The milestone means that, for the first time, a computer can recognize the words in a conversation as well as a person would. The next frontier is to move from recognition to understanding.
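
The word error rate quoted here has a standard definition: the word-level edit distance (substitutions, insertions, deletions) between the system's transcript and a reference transcript, divided by the number of reference words. A self-contained sketch (the example sentences are made up, not from the Switchboard task):

```python
# Word error rate via the classic Levenshtein dynamic program.
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

wer = word_error_rate("the cat sat on the mat", "the cat sat on a mat")
# one substitution over six reference words -> 1/6
```

A 5.9 percent WER thus means roughly one word in seventeen is transcribed differently from the reference.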

Paper: https://arxiv.org/pdf/1610.05256v1.pdf

#DeepLearning #SpeechRecognition #MachineLearning
Microsoft has made a major breakthrough in speech recognition, creating a technology that recognizes the words in a conversation as well as a person does. In a paper published Monday, …

Lukas Masuch

Shared publicly
 
Iris.ai - The Artificial Intelligence that reads science

Iris AI is your science assistant, helping you map out relevant research for your thesis or R&D project. The service aims to double your productivity over existing tools such as Google Scholar. The flow is simple: give Iris the URL of a research paper. She reads the abstract, maps out the key concepts, and presents you with the most relevant articles from more than 30 million Open Access papers. With a nicely visualized overview you can navigate around until you find what you need and directly download the paper, or use it to make a new map.
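
One plausible way to implement the "read the abstract, extract key concepts, rank related papers" step is plain TF-IDF similarity. This is a generic sketch under that assumption, not Iris AI's actual pipeline, and the abstracts are invented:

```python
# Rank candidate abstracts by TF-IDF cosine similarity to a query abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Deep neural networks for image classification",           # query paper
    "Convolutional networks achieve state of the art vision",  # candidates
    "A survey of reinforcement learning for robotics",
    "Image recognition with deep convolutional models",
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()
ranked = scores.argsort()[::-1]  # most relevant candidate first
```

A production system would add concept extraction, synonym handling, and a much larger index, but the ranking primitive is often this simple.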

#ArtificialIntelligence #Science
Iris AI will analyze the abstract of your research paper, present the key concepts, and link those with research papers
Good use of Voronoi graphs for visualizing the weighted concepts. I assume Iris.ai only has to read and train on the abstracts? Matching key concepts to find other papers is not a bad idea. It would, however, be really nice to have an automated way to build a reference list by ticking the items you want to cite, with a button that generates an APA6-style reference sheet for automatic citing.

Lukas Masuch

Shared publicly
 
How Robots Can Acquire New Skills from Their Shared Experience via Deep Learning

If we enable robots to transmit their experiences to each other, could they learn to perform motion skills in close coordination with sensing in realistic environments? Perhaps one of the simplest ways for robots to teach each other is to pool information about their successes and failures in the world.

In these experiments, Google researchers tasked robots with trying to move their arms to goal locations, or reaching to and opening a door. Each robot has a copy of a neural network that allows it to estimate the value of taking a given action in a given state. By querying this network, the robot can quickly decide what actions might be worth taking in the world. When a robot acts, noise is added to the actions it selects, so the resulting behavior is sometimes a bit better than previously observed, and sometimes a bit worse. This allows each robot to explore different ways of approaching a task.

Records of the actions taken by the robots, their behaviors, and the final outcomes are sent back to a central server. The server collects the experiences from all of the robots and uses them to iteratively improve the neural network that estimates value for different states and actions. The model-free algorithms employed look across both good and bad experiences and distill these into a new network that is better at understanding how action and success are related. Then, at regular intervals, each robot takes a copy of the updated network from the server and begins to act using the information in its new network. Given that this updated network is a bit better at estimating the true value of actions in the world, the robots produce better behavior. This cycle can then be repeated to continue improving on the task. In the accompanying video, a robot explores the door-opening task.
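
The act-with-noise / pool-on-a-server / update / sync cycle can be sketched with a deliberately tiny stand-in problem. Everything below (the three actions, their success rates, the noise rate) is invented for illustration; the real work uses continuous arm control and deep value networks:

```python
# Pooled learning: several "robots" share one central value estimate.
import random

random.seed(0)
ACTIONS = [0, 1, 2]
TRUE_VALUE = {0: 0.2, 1: 0.9, 2: 0.5}     # hypothetical success rates

server_values = {a: 0.0 for a in ACTIONS}  # shared "network"
counts = {a: 0 for a in ACTIONS}

for _ in range(200):
    batch = []
    for robot in range(4):                 # each robot syncs its own copy
        local = dict(server_values)
        if random.random() < 0.2:          # exploration noise
            action = random.choice(ACTIONS)
        else:                              # otherwise act greedily
            action = max(local, key=local.get)
        success = 1.0 if random.random() < TRUE_VALUE[action] else 0.0
        batch.append((action, success))
    for action, success in batch:          # server pools the experience
        counts[action] += 1
        server_values[action] += (success - server_values[action]) / counts[action]

best = max(server_values, key=server_values.get)
```

The pooling is what makes this faster than solo learning: every robot's successes and failures sharpen the single shared estimate that all robots then act from.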

Paper: https://arxiv.org/abs/1610.00633

#DeepLearning #MachineLearning #Robots #Google

Lukas Masuch

Shared publicly
 
Combining satellite imagery and machine learning to predict poverty

The elimination of poverty worldwide is the first of 17 UN Sustainable Development Goals for the year 2030. Tracking progress towards this goal requires more frequent and more reliable data on the distribution of poverty than traditional data collection methods can provide. In this project, Stanford researchers propose an approach that combines machine learning with high-resolution satellite imagery to provide new data on socioeconomic indicators of poverty and wealth. By creating a deep-learning algorithm that can recognize signs of poverty in satellite images, such as the condition of roads, the team sorted through a million images to accurately identify economic conditions in five African countries.

Homepage: http://sustain.stanford.edu
Paper: http://science.sciencemag.org/content/353/6301/790

#Satellite #Stanford #Poverty #DeepLearning #MachineLearning