Mixture of Variational Autoencoders - a Fusion Between MoE and VAE
This technical article proposes a novel architecture that fuses Mixture of Experts (MoE) with Variational Autoencoders (VAE). It explores how to achieve label-free conditional generation (e.g., generating specific MNIST digits) by using a manager network to route inputs to specialized VAE 'experts', all trained in an entirely unsupervised manner.
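
As a rough illustration of the idea, here is a minimal PyTorch sketch of how such a mixture might be wired: a manager network produces routing weights over several small VAE experts, each expert's per-sample negative ELBO is weighted by its routing probability, and sampling from a chosen expert gives label-free conditional generation. The class and function names (`MixtureOfVAEs`, `VAEExpert`, `elbo_loss`) and the soft-routing loss are assumptions made for illustration, not the article's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VAEExpert(nn.Module):
    """A small fully-connected VAE for flattened 28x28 images (one 'expert').
    Illustrative sketch; sizes and layers are assumptions, not the article's values."""

    def __init__(self, input_dim=784, latent_dim=16, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

    def elbo_loss(self, x):
        # Per-sample negative ELBO: reconstruction term + KL divergence.
        # Assumes x is normalized to [0, 1] (standard for MNIST).
        recon, mu, logvar = self(x)
        recon_loss = F.binary_cross_entropy(recon, x, reduction="none").sum(dim=1)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
        return recon_loss + kl


class MixtureOfVAEs(nn.Module):
    """Manager network softly routes each input to a set of VAE experts."""

    def __init__(self, num_experts=10, input_dim=784):
        super().__init__()
        self.experts = nn.ModuleList([VAEExpert(input_dim) for _ in range(num_experts)])
        self.manager = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, num_experts),
        )

    def forward(self, x):
        weights = F.softmax(self.manager(x), dim=1)                       # (batch, num_experts)
        losses = torch.stack([e.elbo_loss(x) for e in self.experts], dim=1)
        # Weight each expert's negative ELBO by its routing probability, so
        # experts specialize on the inputs routed to them -- no labels needed.
        return (weights * losses).sum(dim=1).mean()

    @torch.no_grad()
    def generate(self, expert_idx, n=16, latent_dim=16):
        # Label-free conditional generation: sample latents and decode
        # with one chosen expert, which stands in for a class label.
        z = torch.randn(n, latent_dim)
        return self.experts[expert_idx].decoder(z)
```

In such a setup the manager and experts would be trained jointly on unlabeled MNIST by minimizing the routed loss returned by `forward`; at generation time, choosing an expert index plays the role of the missing class label.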