# Autoencoders

This repository is a Torch version of [Building Autoencoders in Keras](https://blog.keras.io/building-autoencoders-in-keras.html), containing only the code for reference - please refer to the original blog post for an explanation of autoencoders. Training hyperparameters have not been adjusted. The following models are implemented:

- AE: Fully-connected autoencoder
- SparseAE: Sparse autoencoder
- DeepAE: Deep (fully-connected) autoencoder
- ConvAE: Convolutional autoencoder
- UpconvAE: Upconvolutional autoencoder - also known as a deconvolutional or transposed-convolutional autoencoder (bonus)
- DenoisingAE: Denoising (convolutional) autoencoder
- Seq2SeqAE: Sequence-to-sequence autoencoder
- VAE: Variational autoencoder
- CatVAE: Categorical variational autoencoder (bonus)
- AAE: Adversarial autoencoder (bonus)
- WTA-AE: Winner-take-all autoencoder (bonus)

Different models can be chosen using `th main.lua -model <modelName>`.
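
For orientation, the simplest of these, the fully-connected AE, can be sketched in a few lines of `nn` (the layer sizes here are illustrative assumptions, not necessarily the definitions used in `models/`):

```lua
local nn = require 'nn'

-- Encoder: compress a flattened 28x28 MNIST digit to a small code
local encoder = nn.Sequential()
encoder:add(nn.Linear(784, 128))
encoder:add(nn.ReLU(true))
encoder:add(nn.Linear(128, 32))

-- Decoder: reconstruct the 784 pixels from the 32-dimensional code
local decoder = nn.Sequential()
decoder:add(nn.Linear(32, 128))
decoder:add(nn.ReLU(true))
decoder:add(nn.Linear(128, 784))
decoder:add(nn.Sigmoid())

-- The autoencoder is the encoder followed by the decoder, trained to
-- minimise a reconstruction loss such as binary cross-entropy
local autoencoder = nn.Sequential():add(encoder):add(decoder)
local criterion = nn.BCECriterion()
```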

The denoising criterion can be used to replace the standard (autoencoder) reconstruction criterion by passing the `-denoising` flag. For example, a denoising AAE (DAAE) can be set up using `th main.lua -model AAE -denoising`. The corruption process is additive Gaussian noise ~ N(0, 0.5).
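
The repository implements this as a criterion, but the same corruption can be sketched by prepending noise to the model itself: `dpnn` provides a `WhiteNoise` module that adds zero-mean Gaussian noise during training only. A sketch, reusing the `encoder`/`decoder` names from above and reading the 0.5 as the standard deviation:

```lua
require 'dpnn' -- provides nn.WhiteNoise

-- Corrupt the input with additive Gaussian noise before encoding;
-- WhiteNoise is only active during training and is a no-op in evaluate mode
local denoisingAE = nn.Sequential()
denoisingAE:add(nn.WhiteNoise(0, 0.5)) -- mean 0, std 0.5
denoisingAE:add(encoder)
denoisingAE:add(decoder)
```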

MCMC sampling can be used for VAEs, CatVAEs and AAEs with `th main.lua -model <modelName> -mcmc <steps>`. To see the effects of MCMC sampling with this simple setup, it is best to draw the initial samples from a Gaussian with a large standard deviation, e.g. `-sampleStd 5`.
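
Conceptually, the sampling chain starts from a broad Gaussian in the latent space and then alternates decoding and re-encoding, letting the samples drift towards the learned data manifold. A rough sketch of this idea (again reusing the `encoder`/`decoder` names from above; for a VAE the encoder outputs distribution parameters, so the actual code in `main.lua` is more involved):

```lua
-- Draw initial latent samples from N(0, sampleStd^2), then iterate the
-- decode/re-encode chain for the requested number of MCMC steps
local nSamples, zSize, steps, sampleStd = 16, 32, 10, 5
local z = torch.randn(nSamples, zSize) * sampleStd
for step = 1, steps do
  local x = decoder:forward(z)    -- decode latents to reconstructions
  z = encoder:forward(x):clone()  -- re-encode to obtain refreshed latents
end
local samples = decoder:forward(z)
```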

## Requirements

The following luarocks packages are required:

- mnist
- dpnn (for DenoisingAE)
- rnn (for Seq2SeqAE)
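
They can be installed with luarocks, for example:

```sh
luarocks install mnist
luarocks install dpnn
luarocks install rnn
```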