This study examines the fundamental theory needed to understand Variational Autoencoders (VAEs) and generative latent time-series models. We also explain how these models extend the principles of VAEs to time-series data by incorporating temporal dependencies into the latent space. By leveraging the probabilistic nature of VAEs and the temporal structure captured by generative latent time-series models, researchers and practitioners can generate synthetic data for applications ranging from image generation to time-series forecasting. Through experiments and examples, we demonstrate the efficacy of these models in generating synthetic data that closely resembles the characteristics of the original dataset.
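The abstract describes extending a VAE by placing temporal dependencies in the latent space. The following is a minimal sketch of that idea, not the paper's implementation: a sequence VAE in PyTorch in which a learned transition carries the latent state from one time step to the next. All names (SeqVAE, x_dim, z_dim, h_dim) and the simple Gaussian ELBO are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SeqVAE(nn.Module):
    """Sketch of a VAE for sequences with a recurrent latent transition."""
    def __init__(self, x_dim=1, z_dim=8, h_dim=32):
        super().__init__()
        self.encoder = nn.GRU(x_dim, h_dim, batch_first=True)   # summarizes x_1..x_t
        self.to_mu = nn.Linear(h_dim, z_dim)
        self.to_logvar = nn.Linear(h_dim, z_dim)
        self.transition = nn.GRUCell(z_dim, z_dim)               # carries latent state so z_t depends on z_{t-1}
        self.decoder = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))

    def forward(self, x):
        # x: (batch, time, x_dim)
        h, _ = self.encoder(x)                                   # (batch, time, h_dim)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        # Propagate the latent state through time.
        z_t = torch.zeros(x.size(0), mu.size(-1), device=x.device)
        z_seq = []
        for t in range(x.size(1)):
            z_t = self.transition(z[:, t], z_t)
            z_seq.append(z_t)
        x_hat = self.decoder(torch.stack(z_seq, dim=1))
        # Simplified ELBO: Gaussian reconstruction error plus KL to a standard normal prior.
        recon = ((x - x_hat) ** 2).mean()
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl

# Usage: loss = SeqVAE()(torch.randn(4, 20, 1)); loss.backward()
```

Sampling from such a model by rolling the transition forward and decoding each latent state is one way to produce synthetic sequences that mimic the training data.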
Exploring Variational Autoencoders and Generative Latent Time-Series Models for Synthetic Data Generation and Forecasting
2024-08-02
329226 bytes
Conference paper
Electronic Resource
English
Springer Verlag | 2022