This study examines the fundamental theory needed to understand Variational Autoencoders (VAEs) and generative latent time-series models. We also explain how these models extend the principles of VAEs to time-series data by incorporating temporal dependencies into the latent space. By leveraging the probabilistic nature of VAEs and the temporal dependencies captured by generative latent time-series models, researchers and practitioners can generate synthetic data for applications ranging from image generation to time-series forecasting. Through experiments and examples, we demonstrate that these models generate synthetic data that closely resembles the characteristics of the original dataset.
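The abstract's central idea, incorporating temporal dependencies into the VAE latent space, can be illustrated with a minimal sketch. The following is not the paper's implementation but an assumed, illustrative PyTorch example of one common realization: a sequence VAE whose prior over each latent z_t is conditioned on the preceding latents through a GRU. All class names, layer sizes, and the toy data are hypothetical.

# Minimal sketch (illustrative, not the paper's model): a VAE with a
# GRU-parameterised temporal prior p(z_t | z_{<t}) over the latent sequence.
import torch
import torch.nn as nn

class TemporalLatentVAE(nn.Module):
    def __init__(self, x_dim=10, z_dim=8, h_dim=32):
        super().__init__()
        # Encoder: maps each observation x_t to q(z_t | x_t)
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        # Temporal prior: p(z_t | z_{<t}) parameterised by a GRU over past latents
        self.prior_rnn = nn.GRU(z_dim, h_dim, batch_first=True)
        self.prior_mu = nn.Linear(h_dim, z_dim)
        self.prior_logvar = nn.Linear(h_dim, z_dim)
        # Decoder: reconstructs x_t from z_t
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):  # x: (batch, time, x_dim)
        h = self.enc(x)
        mu_q, logvar_q = self.enc_mu(h), self.enc_logvar(h)
        # Reparameterisation trick: sample z_t from q(z_t | x_t)
        z = mu_q + torch.randn_like(mu_q) * torch.exp(0.5 * logvar_q)
        # Condition the prior on previous latents (sequence shifted by one step)
        z_prev = torch.cat([torch.zeros_like(z[:, :1]), z[:, :-1]], dim=1)
        h_p, _ = self.prior_rnn(z_prev)
        mu_p, logvar_p = self.prior_mu(h_p), self.prior_logvar(h_p)
        x_hat = self.dec(z)
        # ELBO terms: reconstruction error + KL(q(z_t|x_t) || p(z_t|z_{<t}))
        recon = ((x - x_hat) ** 2).mean()
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp() - 1).mean()
        return recon + kl

model = TemporalLatentVAE()
loss = model(torch.randn(4, 20, 10))  # toy batch: 4 sequences of length 20
loss.backward()

Sampling new sequences would then proceed by rolling the GRU prior forward, drawing each z_t from p(z_t | z_{<t}), and decoding it, which is how such models produce synthetic time series.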


Title: Exploring Variational Autoencoders and Generative Latent Time-Series Models for Synthetic Data Generation and Forecasting

Contributors: Dodda, Suresh (author)

Publication date: 2 August 2024

Format / extent: 329,226 bytes

Media type: Conference paper

Format: Electronic resource

Language: English



Similar titles:

Frugal Incremental Generative Modeling using Variational Autoencoders
Enescu, Victor / Sahbi, Hichem | ArXiv | 2025 | Free access

Adaptive Compression of the Latent Space in Variational Autoencoders
Sejnova, Gabriela / Vavrecka, Michal / Stepanova, Karla | ArXiv | 2023 | Free access

Variational Autoencoders
Ghojogh, Benyamin / Crowley, Mark / Karray, Fakhri et al. | Springer Verlag | 2022

Perceptual Generative Autoencoders
Zhang, Zijun / Zhang, Ruixiang / Li, Zongpeng et al. | ArXiv | 2019 | Free access