What's new? 2 Aug 2017
ERC Grant:
Many jobs for PhD students and PostDocs
to be hired in 2017 & 2018
2017: More Jobs
(Jobs 2016)
FAQ in AMA (Ask Me Anything) on
reddit
G+ posts on Deep Learning and AI etc
Videos (2009-)
Publications (2016)
CV (2017)
Master's in Artificial Intelligence (Fall 2017)
Contact:
Jürgen Schmidhuber
IDSIA,
Galleria 2,
6928 Manno-Lugano,
Switzerland
Fax +41 58 666666 1
Phone +41 58 666666 2
Secretariat +41 58 666666 6
Send spam etc to
[email protected]
Pronounce: You_again Shmidhoobuh (if you can say
Schwarzenegger &
Schumacher &
Schiffer,
then you can also say
Schmidhuber)
ON THE NET SINCE 1405
(Muslim calendar)
Scientific Director of IDSIA,
Prof. of AI @ USI,
Prof. SUPSI,
ex-head of Cog Bot Lab
@ TUM,
Dr. rer. nat. habil. 1993 @ CU,
Dr. rer. nat. 1991,
Dipl. Inf. 1987
Curriculum Vitae (2017)
Portraits:
2017 (3),
2015 (3),
2010,
more pics (1963-2007)
RESEARCH TOPICS (more in the columns to the right):
Feedback Neural Networks,
Deep Learning &
Computer Vision &
Pattern Recognition (numerous world records on benchmark datasets, first
superhuman results), award-winning
GPU-based CNNs,
Gödel machines,
Universal AI,
Optimal Problem Solvers,
Evolution,
Reinforcement learning (RL),
Hierarchical RL,
Meta-Learning & Recursive Self-Improvement,
Artificial Curiosity & Creativity & Intrinsic Motivation & Developmental Robotics, Formal Theory of Fun & Creativity,
Theory of Beauty,
Computable Universes,
Generalized Algorithmic Information
COURSES
Machine Learning 1
Machine Learning 2
Our
Pybrain Machine Learning
Library features source code of many
new learning algorithms that cannot be found in other
libraries - see Pybrain video
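A minimal usage sketch, assuming Pybrain's documented quickstart API (module paths and signatures may differ between versions):

# Minimal Pybrain sketch (assumption: follows the library's documented
# quickstart; module paths / signatures may differ between versions).
from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

# Feedforward net with 2 inputs, 3 hidden units, 1 output.
net = buildNetwork(2, 3, 1)

# Toy dataset: XOR.
ds = SupervisedDataSet(2, 1)
ds.addSample((0, 0), (0,))
ds.addSample((0, 1), (1,))
ds.addSample((1, 0), (1,))
ds.addSample((1, 1), (0,))

# Train with backpropagation, then query the net.
trainer = BackpropTrainer(net, ds)
for _ in range(1000):
    trainer.train()          # one epoch over the dataset
print(net.activate((0, 1)))  # should approach 1.0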

ROBOTS
Learning Robots,
Elastic Robots,
Robot Population Explosion,
Statistical Robotics,
Resilient Machines,
Resilient Robots (Science 316 p 688),
CoTeSys Robots,
Cogbotlab (compare LRZ 2005),
Robot Cars,
IDSIA Robotics Lab,
also
at the EXPO21xx show room

LOW-COMPLEXITY ART
Example: Femme Fractale
(more examples),
3D Art (sculpture),
Lego Art: stable rings from LEGO bricks,
art involving JS' kids,
pics of self-improving robots:
state of the art /
the future / the far future
HISTORY
Is history converging? Again? (2006)
Computer history speedup &
Schmidhuber's law: each new breakthrough
comes twice as fast - Omega point around 2040;
see
TEDx talk +
Transcript.
AI History.
The New AI as a formal science.
Raw computing power.
Colossus (Nature 441 p 25),
Telephone (Science 319 p 1759),
First Pow(d)ered Flight (Nature 421 p 689)
MEN who left their mark:
Einstein (general relativity, 1915),
Zuse (first computer, 1935-41),
Goedel (limits of math and computation, 1931),
Turing (Turing machine, 1936: Nature
429 p 501),
Gauss (mathematician of the millennium),
Leibniz (inventor of the bit),
Schickard (father of the computer age),
Solomonoff (theory of optimal prediction),
Darwin (Nature 452 p 530),
Haber & Bosch (1913:
most influential invention of the 20th century),
Archimedes (greatest scientist ever?)

NOBEL PRIZES: Evolution of national shares
by country of birth (by citizenship):
Peace (cit),
Literature (cit),
Medicine (cit),
Chemistry (cit),
Physics (cit),
Sciences (cit),
Total (cit),
English & German
OLYMPICS
London Olympics 2012: EU gold medal count,
Beijing 2008 gold count,
EU medal count of Athens 2004,
Bolt,
All Time Gold Medal Counts
of 2006,
2008,
2010,
2012.
China and former empires (letters in Newsweek, 2004-05).
The European Union - A New Kind of Empire? (2009)

FAMILY
Ulrike Krommer (wife)
Julia & Leonie (kids)
Schmidhuber's
little brother Christof,
a theoretical physicist turned finance guru (see interview).
His papers: most
famous /
most readable /
best /
craziest;
his wife:
Prof. Beliakova, a topologist.
Closest brush with fame (1981),
Bavarian Poetry
(perfect rhyme on 8x4 syllables, and even makes sense, 1990),
Public bar,
Deutsch (rarely updated)
Since age 15 or so, the main goal of professor Jürgen Schmidhuber has been to build a self-improving Artificial Intelligence (AI) smarter than himself, then retire.
His lab's Deep Learning Neural Networks (NNs) (since 1991) and
Long Short-Term Memory
have transformed machine learning and AI,
and
are now (2017) available to billions of users through the world's most valuable public companies including Google, Apple, Microsoft, Amazon, etc.
In 2011, his team was the first to win official
computer vision contests through deep NNs,
with
superhuman performance.
His research group also established the field of mathematically rigorous universal AI and recursive self-improvement in universal problem solvers that learn to learn (since 1987).
His formal theory of creativity & curiosity & fun explains art, science, music, and humor.
He also generalized algorithmic information theory and the many-worlds theory of physics, and introduced the concept of Low-Complexity Art, the information age's extreme form of minimal art.
He is the recipient of numerous awards,
and president of the company NNAISENSE, which aims at building the first practical general purpose AI.
Artificial Recurrent Neural Networks
(1989-2014).
Most work in machine learning focuses on machines
with reactive
behavior. RNNs, however, are more general sequence processors
inspired by human brains. They have adaptive
feedback connections and are
in principle as powerful as any computer.
The first RNNs could not learn to look far
back into the past. But our "Long Short-Term
Memory" (LSTM) RNN overcomes this
fundamental problem,
and efficiently learns to solve many previously unlearnable tasks.
It can be used for
speech recognition, time series prediction, music composition, etc.
In 2009,
our LSTM RNNs became the first recurrent Deep Learning
systems to win official international competitions (with secret test set
known only to the organisers) - they
outperformed all other known methods on the difficult
problem of recognizing unsegmented cursive handwriting,
and also on aspects of speech recognition.
They learn through
gradient descent, evolution, or both.
Compare the RNN Book Preface.
LSTM is getting popular:
Google, Apple, Microsoft, IBM, Baidu, and many other companies
use LSTM RNNs to improve large vocabulary speech recognition, machine translation, language identification / time series prediction / text-to-speech synthesis, etc.
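For orientation, a minimal sketch of a single LSTM step in numpy (the now-standard variant with forget gates, added later by Gers et al.; names, sizes, and the random parameters are illustrative, not the original code):

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell (the now-common variant with forget gates).
    W (4n x d), U (4n x n) and b (4n,) stack the parameters of the input,
    forget, and output gates and of the cell candidate."""
    n = h_prev.size
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0*n:1*n])          # input gate
    f = sigmoid(z[1*n:2*n])          # forget gate
    o = sigmoid(z[2*n:3*n])          # output gate
    g = np.tanh(z[3*n:4*n])          # candidate cell update
    c = f * c_prev + i * g           # cell state: the "long short-term memory"
    h = o * np.tanh(c)               # emitted hidden state
    return h, c

# Tiny usage example with made-up sizes.
d, n = 3, 5
rng = np.random.default_rng(0)
W, U, b = rng.standard_normal((4*n, d)), rng.standard_normal((4*n, n)), np.zeros(4*n)
h, c = np.zeros(n), np.zeros(n)
for x in rng.standard_normal((7, d)):    # a sequence of 7 input vectors
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape, c.shape)                   # (5,) (5,)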
Deep Learning & Computer Vision with
Fast Deep Neural Nets.
The future of search engines and robotics lies in image and video recognition.
Since 2009, our
Deep Learning team has won 9 (nine) first prizes
in important
and highly competitive international contests
(with secret test sets known only
to the organisers), far more than any other team.
Our neural nets also set
numerous world records, and were
the
first Deep Learners to win pattern recognition contests in general (2009),
the
first to win object detection contests (2012),
the
first to win a pure image segmentation contest (2012),
and the first machine learning methods to reach
superhuman visual recognition performance in a contest (2011).
Compare this
Google Tech Talk (2011)
and JS' first Deep Learning system of 1991,
with a Deep Learning timeline 1962-2013.
See also the
history of computer vision contests won by deep CNNs on GPU since 2011.
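For readers who want to see the two core operations of such nets in code, here is a toy forward pass (convolution plus max-pooling) in plain numpy; the sizes, the random filter, and the ReLU placement are illustrative only, and the record-setting systems were multi-layer GPU implementations:

import numpy as np

def conv2d(img, kernel):
    """Valid 2D convolution (really cross-correlation, as in most deep nets)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def maxpool2d(fmap, size=2):
    """Non-overlapping max-pooling."""
    H, W = fmap.shape
    H, W = H - H % size, W - W % size
    f = fmap[:H, :W].reshape(H // size, size, W // size, size)
    return f.max(axis=(1, 3))

img = np.random.rand(28, 28)             # toy grayscale input
kernel = np.random.randn(5, 5) * 0.1     # one (here untrained) filter
feature_map = np.maximum(conv2d(img, kernel), 0.0)   # ReLU nonlinearity
pooled = maxpool2d(feature_map)           # (24, 24) -> (12, 12)
print(pooled.shape)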
Gödel machine:
An old dream of computer scientists is to build an optimally
efficient universal problem solver. The
Gödel machine
can be implemented on a traditional computer and solves
any given computational problem in an optimal fashion inspired by Kurt
Gödel's celebrated self-referential formulas (1931).
It starts with an axiomatic description of itself,
and we may plug in any utility function, such as the expected
future reward of a robot.
Using an efficient proof searcher,
the Gödel machine will rewrite any part of its software
(including the proof searcher)
as soon as it has found
a proof that this will improve its future performance,
given the utility function and the typically limited computational resources.
Self-rewrites are globally optimal (no local maxima!) since provably none
of all the alternative rewrites and proofs (those that could be found by
continuing the proof search) are worth waiting for.
The Gödel machine formalizes I. J. Good's informal remarks (1965) on
an "intelligence explosion" through self-improving "super-intelligences".
Summary. FAQ.
Optimal Ordered Problem Solver.
OOPS solves one task after another, through search for
solution-computing programs. The incremental method optimally
exploits solutions to earlier tasks when possible - compare principles
of Levin's optimal universal search.
OOPS can temporarily rewrite its own search procedure, efficiently
searching for faster search methods (metasearching or
metalearning).
It is applicable to problems of optimization or prediction.
Talk slides.
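The principle behind such program search can be illustrated with a toy version of Levin's universal search: in phase i, every program of length l <= i gets roughly 2^(i - l) interpreter steps, so total search time stays within a constant factor of the fastest solver's time. The mini-language, task, and step accounting below are hypothetical stand-ins, not OOPS itself:

from itertools import product

# Hypothetical mini-language: each instruction transforms an integer.
OPS = {'A': lambda v: v + 1, 'B': lambda v: v * 2, 'C': lambda v: v - 1}

def run(program, v, budget):
    """Run a program (one step per instruction) within a step budget."""
    if len(program) > budget:
        return None
    for op in program:
        v = OPS[op](v)
    return v

def levin_search(inp, target, max_phase=12):
    """Toy Levin search: in phase i, each program p of length <= i
    gets about 2**(i - len(p)) interpreter steps."""
    for phase in range(1, max_phase + 1):
        for length in range(1, phase + 1):
            budget = 2 ** (phase - length)
            for program in product(OPS, repeat=length):
                if run(program, inp, budget) == target:
                    return ''.join(program)
    return None

print(levin_search(3, 10))   # e.g. 'AAB': (3 + 1 + 1) * 2 = 10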
Super Omegas and Generalized Kolmogorov Complexity and
Algorithmic Probability.
The Kolmogorov complexity K(x) of a bitstring x is the length of the
shortest program that computes x and halts. Solomonoff's
algorithmic probability of x is the probability of guessing
a program for x. Chaitin's Omega is the halting probability
of a Turing machine with random input (Omega is known as
the "number of wisdom" because it compactly encodes all mathematical truth).
Schmidhuber generalized
all of this
to non-halting but converging programs. This led to
the shortest possible formal descriptions and to non-enumerable but limit-computable
measures and Super Omegas, and even has consequences for computable universes and
optimal inductive inference. Slides.
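For reference, the standard definitions being generalized here, for a universal prefix machine U and binary programs p of length l(p):

K(x) = \min\{\, l(p) : U(p) = x \text{ and } U \text{ halts} \,\}   (Kolmogorov complexity)

m(x) = \sum_{p\,:\,U(p)=x} 2^{-l(p)}   (Solomonoff's algorithmic probability of x)

\Omega = \sum_{p\,:\,U(p)\ \text{halts}} 2^{-l(p)}   (Chaitin's halting probability)

The generalization mentioned above relaxes the halting requirement to programs whose output converges in the limit.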
Universal Learning Algorithms.
There is a theoretically optimal way of
predicting the future, given the past.
It can be used to define an optimal (though
noncomputable)
rational agent that maximizes
its expected reward in almost arbitrary environments sampled
from computable probability distributions.
This work represents the first mathematically
sound theory of universal artificial intelligence - most
previous work on AI was either heuristic or very limited.
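In symbols, a simplified sketch of the construction (notation illustrative): predictions come from a Bayes mixture \xi over the class \mathcal{M} of enumerable semimeasures \nu with weights w_\nu, and the agent picks actions maximizing \xi-expected future reward:

\xi(x_{1:t}) = \sum_{\nu \in \mathcal{M}} w_\nu\, \nu(x_{1:t}), \qquad \xi(x_{t+1} \mid x_{1:t}) = \xi(x_{1:t+1}) / \xi(x_{1:t})

a_t = \arg\max_{a}\; \mathbb{E}_{\xi}\Big[ \textstyle\sum_{k=t}^{t+m} r_k \;\Big|\; \text{history},\, a \Big]

This is the noncomputable but theoretically optimal rational agent referred to above.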
Speed Prior.
Occam's Razor: prefer simple solutions to complex ones.
But what exactly does "simple" mean? According to tradition something
is simple if it has a short description or program, that is,
it has low Kolmogorov complexity.
This leads to Solomonoff's & Levin's miraculous
probability measure which yields optimal though noncomputable predictions,
given past observations. The Speed Prior
is different though: it is a new simplicity measure based on
the fastest way of describing objects, not the shortest.
Unlike the traditional one, it leads to near-optimal computable predictions,
and provokes unusual prophecies concerning the future of our universe.
Talk slides.
Transcript of
TEDx talk.
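For contrast with the traditional measure, one closely related resource-bounded quantity is Levin's Kt complexity, shown here only to convey the flavor of charging for runtime rather than description length alone (it is not the Speed Prior's exact definition):

Kt(x) = \min_{p}\{\, l(p) + \log_2 t(p) : U(p) = x \text{ within } t(p) \text{ steps} \,\}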
In the Beginning was the Code.
In 1996 Schmidhuber wrote the first paper
about all possible computable universes. His
`Great Programmer'
is consistent with Zuse's
thesis (1967) of computable physics, against which there is no
physical evidence, contrary to common belief. If everything is computable, then
which exactly is our universe's program? It turns out that the simplest program
computes all universes,
not just ours. Later work (2000) on
Algorithmic Theories of Everything analyzed all the
universes with limit-computable probabilities as well as the very
limits of formal describability. This paper led to above-mentioned
generalizations of algorithmic information and
probability and Super Omegas as well as the
Speed Prior.
See comments on Wolfram's 2002 book
and letter
on randomness in physics (Nature 439, 2006).
Talk slides,
TEDx video,
transcript.
Learning Robots.
Some hardwired robots achieve impressive feats.
But they do not learn like babies do.
Traditional
reinforcement learning algorithms
are limited to simple reactive behavior and do not
work well for realistic robots.
Hence robot learning requires novel methods for
learning to identify important past events and memorize them until needed.
Our group is focusing on the above-mentioned
recurrent neural networks,
RNN evolution,
Compressed Network Search,
and policy gradients.
Collaborations:
with UniBW on robot cars,
with TUM-AM on
humanoids learning to walk,
with DLR on artificial hands.
New IDSIA projects
on developmental robotics
with curious adaptive humanoids
have started in 2009. See
AAAI 2013 Best Student Video.
Financial Forecasting.
Our most lucrative neural network application
employs a second-order method
for finding the simplest model of stock market training data.
Learning attentive vision.
Humans and other biological systems use sequential gaze shifts for
pattern recognition. This can be much more efficient than
fully parallel approaches to vision. In 1990 we built an
artificial fovea controlled by an adaptive neural controller. Without
a teacher, it learns to find targets
in a visual scene, and to track moving targets.
Artificial Evolution.
State-of-the-art methods for network evolution
co-evolve all
neurons in parallel (excellent results in various
applications).
EVOLINO
outperforms previous methods on several
supervised learning tasks, and yields
the first recurrent support vector machines.
Probabilistic
incremental program evolution evolves
computer programs through probabilistic templates instead
of program populations (first approach to evolving entire
soccer team strategies from scratch).
As an undergrad Schmidhuber also implemented the first
genetic programming system with
loops and variable length code (1987, see below).
Our novel Natural Evolution Strategies (2008-) yield excellent
results and link
policy gradients to evolution. And
while most previous algorithms can evolve only hundreds of
adaptive parameters, not millions, our
Compressed Network Search
(1995-) finds compact descriptions of huge networks. A 2013 variant was the first method to evolve neural network controllers with over a million weights.
Compare work on learning to think.
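A rough sketch of the compressed-encoding idea: a larger weight matrix is generated from a few low-frequency cosine coefficients, and only those coefficients are evolved by simple hill climbing. The task, sizes, and the (1+1)-style strategy are illustrative stand-ins, not the actual method:

import numpy as np

rng = np.random.default_rng(0)

def decode(coeffs, rows, cols, k):
    """Expand k*k low-frequency 2D-cosine coefficients into a rows x cols weight matrix."""
    W = np.zeros((rows, cols))
    for a in range(k):
        for b in range(k):
            basis = np.outer(np.cos(np.pi * a * (np.arange(rows) + 0.5) / rows),
                             np.cos(np.pi * b * (np.arange(cols) + 0.5) / cols))
            W += coeffs[a, b] * basis
    return W

def fitness(coeffs, X, Y, k):
    W = decode(coeffs, Y.shape[1], X.shape[1], k)
    pred = np.tanh(X @ W.T)                 # tiny one-layer network
    return -np.mean((pred - Y) ** 2)        # higher is better

# Toy regression task; evolve the compressed genome with (1+1)-style hill climbing.
k = 3                                       # genome: 3x3 = 9 coefficients encode 8x16 = 128 weights
X = rng.standard_normal((200, 16))
Y = np.tanh(X @ rng.standard_normal((16, 8)) * 0.3)
genome = np.zeros((k, k))
best = fitness(genome, X, Y, k)
for _ in range(500):
    child = genome + 0.05 * rng.standard_normal((k, k))
    f = fitness(child, X, Y, k)
    if f > best:
        genome, best = child, f
print("best fitness:", best)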
Interestingness & Active Exploration & Artificial Curiosity & Theory of Surprise
(1990-2010).
Schmidhuber's
curious learning agents
like to go where they
expect to learn
something. These rudimentary artificial scientists or artists
are driven by intrinsic motivation,
losing interest in both predictable and unpredictable things.
A basis for much of the recent work in Developmental Robotics since 2004.
According to Schmidhuber's formal theory of creativity,
art and science and humor are just
by-products of the desire to create / discover more data that is
predictable or compressible in hitherto unknown ways!
See Singularity Summit talk (2009).
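A bare-bones sketch of the intrinsic-reward idea, under strong simplifying assumptions (the "world model" is just a running mean per region, and the curiosity reward is the drop in its prediction error, i.e. learning progress): the agent comes to prefer the region it can still learn about, and loses interest in both the constant and the purely noisy one.

import numpy as np

rng = np.random.default_rng(1)

# Three "regions" of a toy world:
#   0: already predictable (constant 0), 1: never predictable (pure noise),
#   2: initially unknown but learnable (a constant the model has not seen yet).
def observe(region):
    if region == 0:
        return 0.0
    if region == 1:
        return rng.standard_normal()
    return 0.8

n_regions = 3
model_mean = np.zeros(n_regions)      # trivial adaptive world model: one running mean per region
prev_error = np.ones(n_regions)
value = np.zeros(n_regions)           # running estimate of expected curiosity reward
visits = np.zeros(n_regions, dtype=int)

for step in range(2000):
    region = int(np.argmax(value + 0.1 * rng.standard_normal(n_regions)))  # noisy greedy choice
    visits[region] += 1
    x = observe(region)
    error = (x - model_mean[region]) ** 2
    model_mean[region] += 0.05 * (x - model_mean[region])   # the model learns
    intrinsic_reward = prev_error[region] - error           # learning progress, not error itself
    prev_error[region] = error
    value[region] += 0.05 * (intrinsic_reward - value[region])

print("visits per region (constant, noise, learnable):", visits)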
Reinforcement Learning
in partially observable worlds.
Just like humans, reinforcement learners are supposed to
maximize expected pleasure and
minimize expected pain. Most traditional work is limited to
reactive mappings from sensory inputs to actions.
Our approaches (1989-2003) for
partially observable environments
are more general: they learn how to use memory and internal states,
sometimes through evolution of RNNs.
The first universal reinforcement learner
is optimal if we ignore computation time,
and here
is one that is optimal if we don't.
The novel Natural Evolution Strategies (2008-) link
policy gradients to evolution. See also
Compressed Network Search.
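A toy illustration of the memory point, with illustrative assumptions throughout: a two-step task whose rewarded action depends on a cue seen one step earlier, a one-unit recurrent policy, and a simple (1+1)-style evolution strategy standing in for the group's actual algorithms.

import numpy as np

rng = np.random.default_rng(2)

def episode_return(w, cue):
    """Cue is shown at step 0, nothing at step 1; the correct action at step 1
    equals the cue, so the policy must carry it in its recurrent state h."""
    h = 0.0
    h = np.tanh(w[0] * cue + w[1] * h + w[2])   # step 0: observe cue
    h = np.tanh(w[0] * 0.0 + w[1] * h + w[2])   # step 1: observe nothing, act from memory
    action = 1 if w[3] * h + w[4] > 0 else 0
    return 1.0 if action == cue else 0.0

def fitness(w):
    return sum(episode_return(w, cue) for cue in (0, 1)) / 2.0

# (1+1)-style evolution of the 5 policy parameters.
w = 0.1 * rng.standard_normal(5)
best = fitness(w)
for _ in range(2000):
    child = w + 0.2 * rng.standard_normal(5)
    f = fitness(child)
    if f >= best:
        w, best = child, f
print("success rate after evolution:", fitness(w))   # should reach 1.0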
Unsupervised learning; non-linear ICA; history compression.
Pattern recognition works better on non-redundant
data with independent components. Schmidhuber's
Predictability Minimization
(1992) was the first non-linear neural algorithm for learning to encode
redundant inputs in this way. It is based on co-evolution of predictors and
feature detectors
that fight each other: the detectors try to extract features
that make them unpredictable.
His neural
history compressors
(1991) compactly encode sequential data for
Deep Learning.
And
Lococode unifies regularization
and unsupervised learning. The feature detectors generated
by such unsupervised methods resemble those of
our more recent supervised neural computer vision systems.
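A stripped-down sketch of Predictability Minimization on redundant toy data (sigmoid code units, linear predictors without bias, and none of the original's refinements): the predictors are trained to predict each code unit from the others, while the code units are trained to defeat them.

import numpy as np

rng = np.random.default_rng(3)

# Redundant toy data: 4 observed dims driven by 2 independent binary sources.
n, d, m = 500, 4, 2                       # samples, input dims, code units
S = rng.integers(0, 2, size=(n, 2)).astype(float)
X = np.column_stack([S[:, 0], S[:, 0], S[:, 1], S[:, 1]]) + 0.05 * rng.standard_normal((n, d))

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
W = 0.1 * rng.standard_normal((d, m))     # encoder weights (code units)
V = np.zeros((m, m))                      # linear predictors: row j predicts unit j from the others
mask = 1.0 - np.eye(m)                    # each unit is predicted only from the OTHER units

for step in range(2000):
    Y = sigmoid(X @ W)                    # code unit activations, shape (n, m)
    P = Y @ (V * mask).T                  # P[:, j] = prediction of unit j
    E = P - Y                             # prediction errors
    # Predictors: gradient descent, trying to predict each unit from the others.
    V -= 0.5 * (2.0 * (E.T @ Y) / n) * mask
    # Code units: gradient ascent on the same error, trying to be unpredictable
    # (includes the indirect effect of each unit on the other units' predictions).
    dY = (-2.0 * E + 2.0 * E @ (V * mask)) / n
    W += 0.1 * X.T @ (dY * Y * (1.0 - Y))

print("mean squared prediction error after training:", float(np.mean(E ** 2)))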
Metalearning Machines / Learning to Learn / Recursive Self-Improvement.
Can we construct metalearning algorithms that learn better
learning algorithms? This question has been a main drive of
Schmidhuber's research since his 1987
diploma thesis.
In 1993 he introduced
self-referential weight matrices, and in
1994 self-modifying policies trained by the
"success-story algorithm"
(talk slides). His first bias-optimal metalearner
was the above-mentioned
Optimal Ordered Problem Solver (2002),
and the ultimate metalearner is the
Gödel machine (2003).
See also work on "learning to think."
Automatic Subgoal Generators and Hierarchical Learning.
There is no teacher providing useful intermediate subgoals
for our reinforcement learning systems. In the early 1990s
Schmidhuber introduced
gradient-based
(pictures)
adaptive subgoal generators; in 1997 also
discrete ones.
Program Evolution and Genetic Programming.
As an undergrad Schmidhuber used Genetic Algorithms
to evolve
computer
programs on a Symbolics LISP machine at
SIEMENS AG.
Two years later this was still novel: in 1987 he published
the world's 2nd paper
on pure "Genetic Programming" (the first was Cramer's in 1985)
and the first paper on Meta-Genetic
Programming.
Learning Economies
with Credit Conservation.
In the late 1980s Schmidhuber developed the first credit-conserving
reinforcement learning system based on market
principles, and also the
first neural one.
Neural Heat Exchanger.
Like a physical heat exchanger,
but with neurons instead of liquid.
Perceptions warm up, expectations cool down.
Fast weights instead of recurrent nets.
A slowly changing feedforward neural net learns to quickly
manipulate short-term memory
in quickly changing synapses of
another net. More fast weights.
Evolution of fast weight control.
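A shape-level sketch of the wiring (forward pass only, made-up sizes, no training): the slow net maps a context to the fast weights of a second net, which then processes the current input.

import numpy as np

rng = np.random.default_rng(4)

d_ctx, d_in, d_out = 6, 4, 3
W_slow = 0.1 * rng.standard_normal((d_in * d_out, d_ctx))    # slow, slowly-learned weights

def fast_net(context, x):
    """The slow net maps the context to the FAST weights of a second net,
    which then maps x to an output. Only the forward wiring is shown."""
    W_fast = np.tanh(W_slow @ context).reshape(d_out, d_in)   # short-term memory lives here
    return np.tanh(W_fast @ x)

context = rng.standard_normal(d_ctx)    # e.g. recent history / task description
x = rng.standard_normal(d_in)
print(fast_net(context, x))             # shape (3,)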
Complexity-Based Theory of Beauty.
In 1997 Schmidhuber claimed: among several patterns classified as "comparable"
by some subjective observer, the subjectively most beautiful
is the one with the simplest description, given the
observer's particular method for encoding and memorizing it.
Exemplary applications include
low-complexity faces
and Low-Complexity Art,
the computer-age equivalent of minimal art (Leonardo, 1997).
A low-complexity artwork such as this Femme Fractale both
`looks right' and is computable
by a short program; a typical observer should be
able to see its simplicity. The drive to
create such art is explained by the
formal theory of creativity.
Artificial Ants & Swarm Intelligence.
IDSIA's Artificial Ant Algorithms are multiagent
optimizers that use local search techniques
and communicate via artificial
pheromones that evaporate over time. They broke several important
benchmark world records. This work got numerous
reviews in journals such as
Nature, Science, Scientific American, TIME,
NY Times, Spiegel, Economist, etc. It led to an IDSIA
spin-off company called
ANTOPTIMA.
See also the
AAAI 2011 best video on swarmbots.
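A compact Ant System-style sketch for a tiny random TSP, following the textbook formulation (tours built from pheromone^alpha times inverse-distance^beta, evaporation, deposits proportional to tour quality); parameter values are arbitrary, and IDSIA's actual variants add local search and other refinements.

import numpy as np

rng = np.random.default_rng(5)

n = 12                                      # cities
pts = rng.random((n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2) + np.eye(n)  # eye avoids /0 on diagonal
eta = 1.0 / dist                            # heuristic desirability
tau = np.ones((n, n))                       # pheromone trails
alpha, beta, rho, n_ants = 1.0, 2.0, 0.5, 20

def tour_length(tour):
    return sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))

best_tour, best_len = None, np.inf
for iteration in range(100):
    tours = []
    for _ in range(n_ants):
        tour = [0]
        unvisited = set(range(1, n))
        while unvisited:
            i = tour[-1]
            cand = np.array(sorted(unvisited))
            w = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
            nxt = int(rng.choice(cand, p=w / w.sum()))
            tour.append(nxt)
            unvisited.remove(nxt)
        tours.append(tour)
    tau *= (1.0 - rho)                      # pheromone evaporation
    for tour in tours:
        L = tour_length(tour)
        if L < best_len:
            best_tour, best_len = tour, L
        for i in range(n):
            a, b = tour[i], tour[(i + 1) % n]
            tau[a, b] += 1.0 / L            # deposit proportional to tour quality
            tau[b, a] += 1.0 / L
print("best tour length:", round(best_len, 3))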
All cartoons & artwork &
Fibonacci web design
templates
copyright © by Jürgen Schmidhuber
(except when indicated otherwise).
