AutoEncoders

Autoencoders are a kind of neural network that tries to reconstruct its input, i.e., the output is the same as the input. An autoencoder can be divided into an encoder and a decoder, as illustrated below:

[Figure: Encoder-Decoder architecture]

Autoencoders can be used for image or sound compression and for dimensionality reduction. In specific cases they can provide more interesting and efficient data projections than PCA or other dimensionality reduction techniques. In addition, denoising autoencoders can be used to remove noise from noisy images, and can further be used in representation learning.

The following gist shows how to construct a naive autoencoder that tries to rebuild the handwritten digits of the immortal MNIST dataset:
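(The gist itself is embedded on the original page. Below is a minimal sketch of such a model in Keras; the 32-unit bottleneck, layer sizes, and training hyperparameters are illustrative assumptions, not necessarily what the original gist used.)

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Load MNIST and flatten each 28x28 image into a 784-dim vector in [0, 1].
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

encoding_dim = 32  # size of the compressed representation (assumed)

# Encoder: 784 -> 32, Decoder: 32 -> 784.
inputs = keras.Input(shape=(784,))
encoded = layers.Dense(encoding_dim, activation="relu")(inputs)
decoded = layers.Dense(784, activation="sigmoid")(encoded)

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Train with the input as its own target -- the defining trait of an autoencoder.
autoencoder.fit(x_train, x_train, epochs=20, batch_size=256,
                validation_data=(x_test, x_test))
```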

Running results:

[Figure: Autoencoder reconstructions]

For denoising autoencoders, the input is a randomly corrupted (noisy) image and the target output is the corresponding clean image. The following gist should clear up any confusion:
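(Again, the gist lives on the original page. Here is a sketch of the denoising setup, reusing the MNIST arrays from the previous snippet; the Gaussian corruption with `noise_factor = 0.5` is an assumed choice.)

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

noise_factor = 0.5  # assumed corruption level
x_train_noisy = np.clip(x_train + noise_factor * np.random.normal(size=x_train.shape), 0.0, 1.0)
x_test_noisy = np.clip(x_test + noise_factor * np.random.normal(size=x_test.shape), 0.0, 1.0)

# Fresh model with the same dense architecture as before.
inputs = keras.Input(shape=(784,))
encoded = layers.Dense(32, activation="relu")(inputs)
decoded = layers.Dense(784, activation="sigmoid")(encoded)
denoiser = keras.Model(inputs, decoded)
denoiser.compile(optimizer="adam", loss="binary_crossentropy")

# Noisy inputs, clean targets -- this pairing is what makes it a denoiser.
denoiser.fit(x_train_noisy, x_train, epochs=20, batch_size=256,
             validation_data=(x_test_noisy, x_test))
```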

Result:

[Figure: Denoising Autoencoder results]

Using convolutional networks for the encoder and the decoder should give even better results. The gist is here:
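(A sketch of what such a convolutional denoising autoencoder might look like; the two Conv2D/MaxPooling2D stages down and two Conv2D/UpSampling2D stages up are an assumed architecture, not taken from the gist.)

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Keep the 2D image shape (28x28x1) instead of flattening.
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0

noise_factor = 0.5
x_train_noisy = np.clip(x_train + noise_factor * np.random.normal(size=x_train.shape), 0.0, 1.0)
x_test_noisy = np.clip(x_test + noise_factor * np.random.normal(size=x_test.shape), 0.0, 1.0)

inputs = keras.Input(shape=(28, 28, 1))
# Encoder: 28x28x1 -> 7x7x32
x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D(2, padding="same")(x)
x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
encoded = layers.MaxPooling2D(2, padding="same")(x)
# Decoder: 7x7x32 -> 28x28x1
x = layers.Conv2D(32, 3, activation="relu", padding="same")(encoded)
x = layers.UpSampling2D(2)(x)
x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
x = layers.UpSampling2D(2)(x)
decoded = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train_noisy, x_train, epochs=20, batch_size=128,
                validation_data=(x_test_noisy, x_test))
```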

Result:

[Figure: CNN Denoising Autoencoder results]

Autoencoders can also be used for representation learning, and can encode inputs other than images, even categorical data. In practice the hidden feature size is then often made larger than the input size (an overcomplete representation) to capture more information. A detailed explanation can be found at: http://dkopczyk.quantee.co.uk/dae-part3/ [1]
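(A sketch of this idea: train an autoencoder with an overcomplete hidden layer, then keep only the encoder half as a feature extractor. The input/hidden sizes and variable names are illustrative assumptions.)

```python
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 20    # e.g. one-hot encoded categorical/tabular features (assumed)
hidden_dim = 64   # overcomplete: larger than the input, as discussed above

inputs = keras.Input(shape=(input_dim,))
hidden = layers.Dense(hidden_dim, activation="relu")(inputs)
outputs = layers.Dense(input_dim, activation="sigmoid")(hidden)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# ... train on (noisy input, clean target) pairs as in the sketches above ...

# The standalone encoder maps raw inputs to the learned representation,
# which can then feed a downstream classifier.
encoder = keras.Model(inputs, hidden)
# features = encoder.predict(x)  # x: array of shape (n_samples, input_dim)
```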

References:
[1] Dawid Kopczyk, "Denoising Autoencoder", http://dkopczyk.quantee.co.uk/dae-part3/
