While advances in prompt engineering and retrieval-augmented generation (RAG) have improved LLM proficiency on specialized, domain-specific tasks, there is as yet no industry-standard or widely accepted metric for evaluating the highly fragmented RAG solutions currently being deployed. In this work, we therefore focus on building a robust evaluation platform for LLMs and RAG systems. We contribute 1) a platform that evaluates a RAG system's performance on multimodal input contexts for LLM question answering, and 2) MRAFE (Multimodal Retrieval Augmented Feature Extractor), which processes the information fed into our platform. Through a combination of automated testing and systematic manual testing, we find that our evaluation benchmarks are useful for measuring noise robustness, negative rejection, information integration, and counterfactual rejection. Such a platform would be a useful tool both for developers iterating on retrieval systems and for regulatory bodies drafting AI-focused governance.
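The abstract does not include the platform's scoring code, so the following is only a rough sketch of the kind of per-dimension check such a benchmark could run, shown here for two of the four dimensions (noise robustness and negative rejection). All names (EvalCase, evaluate, answer_fn) and the simple string-containment scoring rule are illustrative assumptions, not the paper's MRAFE implementation.

```python
# Hypothetical sketch of a per-dimension RAG evaluation loop.
# Not the paper's implementation; scoring is a string-containment
# proxy used purely for illustration.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class EvalCase:
    question: str
    context: List[str]   # retrieved passages fed to the model
    expected: str         # gold answer, or a refusal marker
    dimension: str        # e.g. "noise_robustness", "negative_rejection"


def evaluate(answer_fn: Callable[[str, List[str]], str],
             cases: List[EvalCase]) -> Dict[str, float]:
    """Score a question-answering function, grouped by evaluation dimension."""
    totals: Dict[str, int] = {}
    correct: Dict[str, int] = {}
    for case in cases:
        answer = answer_fn(case.question, case.context)
        totals[case.dimension] = totals.get(case.dimension, 0) + 1
        if case.expected.lower() in answer.lower():
            correct[case.dimension] = correct.get(case.dimension, 0) + 1
    return {dim: correct.get(dim, 0) / n for dim, n in totals.items()}


if __name__ == "__main__":
    cases = [
        # Noise robustness: the correct fact sits among irrelevant passages.
        EvalCase("When was the benchmark released?",
                 ["The benchmark was released in 2024.", "Unrelated passage."],
                 "2024", "noise_robustness"),
        # Negative rejection: the context lacks the answer; the model should refuse.
        EvalCase("Who funded the project?",
                 ["Unrelated passage about model architectures."],
                 "cannot answer", "negative_rejection"),
    ]
    dummy = lambda q, ctx: "I cannot answer from the given context."
    print(evaluate(dummy, cases))
```

In a real harness, the per-dimension scores would come from stronger judges (exact match, LLM-as-judge, or human review) rather than substring matching, but the grouping-by-dimension structure is the point of the sketch.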
Multimodal Retrieval Augmented Generation Evaluation Benchmark
2024-06-24
Conference paper