Among the many paradigms for unsupervised representation learning, my focus will be on autoencoders. I will give a brief overview of several autoencoder variants that I contributed to developing, and explain how they relate to other paradigms such as probabilistic graphical models and manifold modeling. This presentation should convey a sense of what was learned, how this line of research matured, and how my vision of what representation learning should consist of changed along the way. I will also present novel perspectives, such as decoderless autoencoders, as well as other research directions and open questions that I believe are worth investigating further in our quest for the ability to autonomously learn truly better, more meaningful representations.