Exploring Variational Autoencoders and Generative Latent Time-Series Models for Synthetic Data Generation and Forecasting

This study examines the fundamental theory needed to understand Variational Autoencoders (VAEs) and generative latent time-series models. We also explain how these models extend the principles of VAEs to time-series data by incorporating temporal dependencies into the latent space. By leveraging the probabilistic nature of VAEs and the temporal dependencies captured by generative latent time-series models, researchers and practitioners can generate synthetic data for applications ranging from image generation to time-series forecasting. Through experiments and examples, we demonstrate the efficacy of these models in generating synthetic data that closely resembles the characteristics of the original dataset.
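For orientation, the sketch below illustrates the idea summarized in the abstract: a VAE whose encoder and decoder are recurrent networks, so the latent code captures the temporal structure of a series and can be sampled to produce synthetic sequences. This is a minimal sketch assuming PyTorch, not the paper's implementation; the class, layer sizes, and names such as TimeSeriesVAE and elbo_loss are illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch); names and dimensions are illustrative.
import torch
import torch.nn as nn

class TimeSeriesVAE(nn.Module):
    """VAE whose encoder summarizes a sequence with a GRU; the latent code
    is decoded back into a sequence, so temporal dependencies are reflected
    in the latent space."""
    def __init__(self, input_dim=1, hidden_dim=64, latent_dim=16, seq_len=50):
        super().__init__()
        self.seq_len = seq_len
        self.encoder_rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder_rnn = nn.GRU(latent_dim, hidden_dim, batch_first=True)
        self.to_output = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        _, h = self.encoder_rnn(x)           # h: (1, batch, hidden_dim)
        h = h.squeeze(0)
        return self.to_mu(h), self.to_logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        # Repeat the latent code across time steps and decode into a sequence.
        z_seq = z.unsqueeze(1).repeat(1, self.seq_len, 1)
        out, _ = self.decoder_rnn(z_seq)
        return self.to_output(out)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def elbo_loss(x_hat, x, mu, logvar):
    """Negative ELBO: reconstruction error plus KL divergence to a unit Gaussian prior."""
    recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Usage: reconstruct a batch of univariate series, then sample new synthetic ones.
model = TimeSeriesVAE()
x = torch.randn(8, 50, 1)                    # (batch, time, features)
x_hat, mu, logvar = model(x)
loss = elbo_loss(x_hat, x, mu, logvar)
synthetic = model.decode(torch.randn(8, 16))  # draw latent codes from the prior
```

Sampling latent codes from the prior and decoding them, as in the last line, is what turns the trained model into a synthetic-data generator.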
02.08.2024
329226 bytes
Conference paper
Electronic resource
English
Springer Verlag | 2022