While advances in prompt engineering and retrieval-augmented generation (RAG) have improved LLM proficiency on field-specific, specialized tasks, no industry standard or widely accepted evaluation metric yet exists for the highly fragmented RAG solutions currently being deployed. Thus, in this work, we focus on building a robust LLM and RAG evaluation platform. We contribute 1) a platform that evaluates a RAG system's performance on multimodal input contexts for LLM question answering, and 2) MRAFE, the Multimodal Retrieval Augmented Feature Extractor, which processes information from the input to our platform. Through automated testing and systematic manual testing, we find that our evaluation benchmarks are useful for assessing noise robustness, negative rejection, information integration, and counterfactual rejection. Such a platform would serve as a useful tool both for developers iterating on retrieval systems and for regulatory bodies creating AI-focused governance.
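As a rough illustration of the four evaluation dimensions named in the abstract, the sketch below shows how a minimal benchmark harness might aggregate per-dimension accuracy for a RAG system exposed as a simple generate(question, context) callable. All names here (EvalCase, run_benchmark, the dimension labels, the toy system) are hypothetical stand-ins for illustration only, not the paper's actual MRAFE pipeline or benchmark format.

# Hypothetical sketch of a per-dimension RAG evaluation harness; not the paper's implementation.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class EvalCase:
    dimension: str          # e.g. "noise_robustness", "negative_rejection",
                            # "information_integration", "counterfactual_rejection"
    question: str
    context: List[str]      # retrieved passages, possibly noisy or counterfactual
    expected_answer: str    # gold answer, or a refusal marker such as "INSUFFICIENT"

def score_case(case: EvalCase, answer: str) -> bool:
    """Crude substring check; a real harness would use stricter matching or an LLM judge."""
    return case.expected_answer.lower() in answer.lower()

def run_benchmark(generate: Callable[[str, List[str]], str],
                  cases: List[EvalCase]) -> Dict[str, float]:
    """Run every case through the RAG system and report accuracy per evaluation dimension."""
    totals: Dict[str, List[int]] = {}
    for case in cases:
        answer = generate(case.question, case.context)
        totals.setdefault(case.dimension, []).append(int(score_case(case, answer)))
    return {dim: sum(hits) / len(hits) for dim, hits in totals.items()}

if __name__ == "__main__":
    # Toy system: answers only when the retrieved context mentions the key term.
    def toy_rag(question: str, context: List[str]) -> str:
        relevant = [p for p in context if "capital" in p.lower()]
        return relevant[0] if relevant else "INSUFFICIENT"

    cases = [
        EvalCase("noise_robustness", "What is the capital of France?",
                 ["Unrelated sports results.", "The capital of France is Paris."],
                 "paris"),
        EvalCase("negative_rejection", "What is the capital of Atlantis?",
                 ["Unrelated sports results."],
                 "insufficient"),
    ]
    print(run_benchmark(toy_rag, cases))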


    Title: Multimodal Retrieval Augmented Generation Evaluation Benchmark

    Contributors:

    Publication date: 2024-06-24

    Size: 1275043 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English



    A comparison of 3D shape retrieval methods based on a large-scale benchmark supporting multimodal queries

    Li, B. / Lu, Y. / Li, C. et al. | British Library Online Contents | 2015



    Global Optimization of Multimodal Aerodynamic Optimization Benchmark Case

    Poole, Daniel J. / Allen, Christian B. / Rendall, T. | AIAA | 2017


    RAGTraffic: Utilizing Retrieval-Augmented Generation for Intelligent Traffic Signal Control

    Zhang, Zhendong / Shen, Zhen / Yuan, Min et al. | IEEE | 2024


    A Retrieval-Augmented Generation-Based Method for Aviation Accident Data Analysis

    Yang, Jianzhong / Xiang, Xinyu / Chen, Xiyuan | IEEE | 2024