Many of today's tasks, such as speech recognition, image generation, translation, classification, and prediction, are performed with the help of machine learning. Artificial neural networks (ANNs) in particular provide convincing results for these tasks. The reasons for this success story are the drastic increase of available data sources in our increasingly digitalized world as well as the development of remarkable ANN architectures. This development has led to models with more and more parameters and ever greater complexity. Unfortunately, this comes at the cost of interpretability of the deployed models. There is, however, a natural desire to explain deployed models not just by empirical observations but also by analytical calculations.

In this thesis, we focus on variational autoencoders (VAEs) and foster the understanding of these models. As the name suggests, VAEs are based on standard autoencoders (AEs) and are therefore used to perform dimensionality reduction of data. This is achieved by a bottleneck structure within the hidden layers of the ANN. From a data input, the encoder, that is, the part up to the bottleneck, produces a low-dimensional representation. The decoder, the part from the bottleneck to the output, uses this representation to reconstruct the input. The model is trained by minimizing the reconstruction error.

In our view, the most remarkable property of VAEs, and hence a central topic of this thesis, is their auto-pruning property. Simply speaking, auto-pruning prevents a VAE with thousands of parameters from overfitting. However, such a desirable property comes with the risk that the model learns nothing at all.

In this thesis, we look at VAEs and auto-pruning from two different angles, and our main contributions to research are the following: (i) We give an analytic explanation of auto-pruning by leveraging the framework of generalized linear models (GLMs). As a result, we are able to explain training results of VAEs before conducting the actual training. (ii) We construct a time-dependent VAE and show the effects of auto-pruning in this model. As a result, we are able to model financial data sequences and estimate the value-at-risk (VaR) of associated portfolios. Our results show that we surpass standard benchmarks for VaR estimation.


Title: Analytical Description of Variational Autoencoders and Application of Temporal Variational Autoencoders to Financial Risk Management
Contributors: Buch, Robert (author)
Publication date: 2022
Format / Extent: 8138 KB; XIV, 108 pages
Media type: Other
Format: Electronic resource
Language: English


Similar titles:

Variational Autoencoders
Ghojogh, Benyamin / Crowley, Mark / Karray, Fakhri et al. | Springer Verlag | 2022

Deep Tracking Portfolios Using Autoencoders and Variational Autoencoders
Urrego, Daniel Aragón / Nieto, Oscar Eduardo Reyes / Quimbayo, Carlos Andrés Zapata | Springer Verlag | 2024

Certifiably Robust Variational Autoencoders
Barrett, Ben / Camuto, Alexander / Willetts, Matthew et al. | ArXiv | 2021

Mixed-curvature Variational Autoencoders
Skopek, Ondrej / Ganea, Octavian-Eugen / Bécigneul, Gary | ArXiv | 2019