The mixture-of-experts (MoE) architecture aggregates several expert components via an additional gating module, which learns to predict the most suitable weighting of the experts' outputs for each input. An MoE thus not only relies on redundancy for increased robustness; we also demonstrate how this architecture can provide additional interpretability while retaining performance similar to a standalone network. As an example, we train expert networks to perform semantic segmentation of traffic scenes and combine them into an MoE with an additional gating network. Our experiments with two different expert model architectures (FRRN and DeepLabv3+) reveal that the MoE reaches, and on certain data subsets even surpasses, the baseline performance, and also outperforms a simple aggregation via ensembling. A further advantage of an MoE is the increased interpretability: comparing the pixel-wise predictions of the whole MoE model with those of the participating experts helps to identify regions of high uncertainty in an input.
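As an illustration of the aggregation scheme described in the abstract, the following is a minimal PyTorch-style sketch of an MoE for semantic segmentation, assuming pre-trained expert networks that each map an image to per-pixel class logits. The gating architecture, class, and parameter names here are hypothetical and are not taken from the chapter.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SegmentationMoE(nn.Module):
    """Combine expert segmentation networks via a pixel-wise gating network.

    Sketch only: each expert is assumed to map an image (N, 3, H, W)
    to class logits (N, C, H, W); the gating network predicts one
    weight map per expert and the MoE output is the weighted sum of
    the experts' logits.
    """

    def __init__(self, experts, in_channels=3):
        super().__init__()
        self.experts = nn.ModuleList(experts)
        num_experts = len(experts)
        # Small convolutional gating network (illustrative architecture only).
        self.gate = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, num_experts, kernel_size=1),
        )

    def forward(self, x):
        # Expert predictions stacked along a new expert dimension: (N, E, C, H, W).
        expert_logits = torch.stack([e(x) for e in self.experts], dim=1)
        # Pixel-wise gating logits, one channel per expert: (N, E, H, W).
        gate_logits = self.gate(x)
        gate_logits = F.interpolate(
            gate_logits, size=expert_logits.shape[-2:],
            mode="bilinear", align_corners=False,
        )
        # Softmax over the expert dimension yields per-pixel mixture weights.
        weights = torch.softmax(gate_logits, dim=1).unsqueeze(2)  # (N, E, 1, H, W)
        # Weighted combination of the experts' logits: (N, C, H, W).
        return (weights * expert_logits).sum(dim=1)
```

In such a setup, the per-pixel gating weights can themselves be inspected: pixels with near-uniform weights mark regions where the experts disagree, which connects to the interpretability argument made in the abstract.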
Evaluating Mixture-of-Experts Architectures for Network Aggregation
Deep Neural Networks and Data for Automated Driving; Chapter 11; pp. 315-333
2022-06-18
19 pages
Article/Chapter (Book)
Electronic Resource
English
Traffic speed forecasting by mixture of experts | IEEE | 2011
A Time Series is Worth Five Experts: Heterogeneous Mixture of Experts for Traffic Flow Prediction | ArXiv | 2024
Evaluating alternative air defense architectures | Tema Archive | 1987
Interpretable Cascading Mixture-of-Experts for Urban Traffic Congestion Prediction | ArXiv | 2024