Multitask variational autoencoders
Master thesis
Permanent link
https://hdl.handle.net/11250/3142974
Publication date
2024-06-03
Collections
- Master theses [220]
Abstract
Variational autoencoders (VAEs) are widely used for generative modeling and representation learning tasks. This thesis presents two novel approaches aimed at enhancing the performance of VAEs through the integration of semi-conditional variational autoencoders (SCVAEs). The integration of SCVAEs and VAEs is motivated by the potential for improving both how effectively the underlying data distribution is captured and the quality of generated samples.

The first method extends the traditional VAE by incorporating a second, conditioned decoder, thereby enabling the model to multitask and learn better latent representations. The second method uses a single, unified decoder for both tasks by employing dedicated training strategies. These approaches are implemented and evaluated on Gaussian VAEs and VQ-VAEs.

Extensive experiments are conducted across diverse image datasets, including MNIST, CIFAR10, and CelebA. The results show that, in certain cases, the proposed methods yield superior performance compared to standard VAE architectures. By bridging the gap between SCVAEs and VAEs, this work offers new insights into how the methods can be improved further and opens up new avenues for future research in the field of generative modeling.
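To make the first method's architecture concrete, the sketch below shows a forward pass through a VAE with one encoder and two decoders: a standard decoder p(x|z) and a second decoder conditioned on auxiliary information c, trained jointly with a shared KL term. This is an illustrative numpy sketch, not the thesis implementation; all dimensions, the linear layers, the mean-squared-error reconstruction terms, and the condition variable c are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w, b):
    """Affine map standing in for a trained network layer."""
    return x @ w + b

# Toy dimensions (illustrative only).
x_dim, c_dim, z_dim = 8, 3, 2

# Randomly initialised weights stand in for trained parameters.
params = {
    "enc_w":  rng.normal(size=(x_dim, 2 * z_dim)) * 0.1,
    "enc_b":  np.zeros(2 * z_dim),
    "dec1_w": rng.normal(size=(z_dim, x_dim)) * 0.1,
    "dec1_b": np.zeros(x_dim),
    "dec2_w": rng.normal(size=(z_dim + c_dim, x_dim)) * 0.1,
    "dec2_b": np.zeros(x_dim),
}

def multitask_vae_loss(x, c, params):
    # Encoder: q(z|x) parameterised by mean and log-variance.
    h = linear(x, params["enc_w"], params["enc_b"])
    mu, logvar = h[:, :z_dim], h[:, z_dim:]
    # Reparameterisation trick: z = mu + sigma * eps.
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)
    # Task 1: unconditional reconstruction p(x|z).
    x_hat = linear(z, params["dec1_w"], params["dec1_b"])
    # Task 2: semi-conditional reconstruction p(x|z, c).
    zc = np.concatenate([z, c], axis=1)
    x_hat_cond = linear(zc, params["dec2_w"], params["dec2_b"])
    # Joint objective: both reconstruction terms plus the KL to N(0, I).
    rec1 = np.mean((x - x_hat) ** 2)
    rec2 = np.mean((x - x_hat_cond) ** 2)
    kl = -0.5 * np.mean(1 + logvar - mu**2 - np.exp(logvar))
    return rec1 + rec2 + kl

x = rng.normal(size=(4, x_dim))   # batch of 4 "images"
c = rng.normal(size=(4, c_dim))   # auxiliary conditions for decoder 2
loss = multitask_vae_loss(x, c, params)
```

Because both decoders backpropagate through the same encoder, the latent code must serve both the unconditional and the conditioned reconstruction task, which is the mechanism the abstract credits for learning better latent representations.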