RAGchain.benchmark.answer package
Submodules
RAGchain.benchmark.answer.metrics module
- class RAGchain.benchmark.answer.metrics.BLEU
  - Bases: BaseAnswerMetric
  - retrieval_metric_function(solutions: List[str], pred: str) → float
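To illustrate what a BLEU-style answer metric computes, here is a simplified sketch (clipped n-gram precision combined with a brevity penalty). This is not RAGchain's exact implementation, which may delegate to a library such as NLTK; the `bleu` helper and its `max_n` parameter are illustrative only.

```python
import math
from collections import Counter
from typing import List

def modified_precision(refs: List[List[str]], hyp: List[str], n: int) -> float:
    # Clipped n-gram precision: each hypothesis n-gram counts at most as
    # often as it appears in the best-matching reference.
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    if not hyp_ngrams:
        return 0.0
    max_ref: Counter = Counter()
    for ref in refs:
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        for ng, c in ref_ngrams.items():
            max_ref[ng] = max(max_ref[ng], c)
    clipped = sum(min(c, max_ref[ng]) for ng, c in hyp_ngrams.items())
    return clipped / sum(hyp_ngrams.values())

def bleu(solutions: List[str], pred: str, max_n: int = 2) -> float:
    refs = [s.split() for s in solutions]
    hyp = pred.split()
    precisions = [modified_precision(refs, hyp, n) for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0
    # Geometric mean of the n-gram precisions.
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty against the reference closest in length.
    ref_len = min((len(r) for r in refs), key=lambda l: (abs(l - len(hyp)), l))
    bp = 1.0 if len(hyp) > ref_len else math.exp(1 - ref_len / max(len(hyp), 1))
    return bp * geo_mean
```

A perfect match scores 1.0, and a prediction sharing no n-grams with any solution scores 0.0.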
 
- class RAGchain.benchmark.answer.metrics.BaseAnswerMetric
  - Bases: ABC
  - eval(solutions: List[str], pred: str) → float
    - Parameters:
      - solutions – list of solutions. If you have only one ground-truth answer, you can use [answer].
      - pred – predicted answer
  - property metric_name
  - abstract retrieval_metric_function(solutions: List[str], pred: str) → float
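A concrete metric subclasses BaseAnswerMetric and implements retrieval_metric_function. The following is a minimal sketch of that pattern, assuming eval() simply delegates to retrieval_metric_function and metric_name reports the class name; the real RAGchain base class may normalize inputs or behave differently. The `ExactMatch` subclass here is hypothetical.

```python
from abc import ABC, abstractmethod
from typing import List

class BaseAnswerMetric(ABC):
    # Stand-in for the interface documented above.
    @property
    def metric_name(self) -> str:
        return type(self).__name__

    def eval(self, solutions: List[str], pred: str) -> float:
        # Delegate scoring to the concrete metric implementation.
        return self.retrieval_metric_function(solutions, pred)

    @abstractmethod
    def retrieval_metric_function(self, solutions: List[str], pred: str) -> float:
        ...

class ExactMatch(BaseAnswerMetric):
    # 1.0 if the prediction matches any ground-truth answer exactly, else 0.0.
    def retrieval_metric_function(self, solutions: List[str], pred: str) -> float:
        return 1.0 if pred in solutions else 0.0

metric = ExactMatch()
score = metric.eval(["Paris"], "Paris")  # 1.0
```

With a single ground-truth answer, pass it as a one-element list, e.g. `metric.eval(["Paris"], pred)`.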
 
- class RAGchain.benchmark.answer.metrics.BasePassageAnswerMetric
  - Bases: BaseAnswerMetric, ABC
  - eval(knowledge: List[str], pred: str) → float
    - Parameters:
      - knowledge – list of knowledge. Generally these are the ground-truth passages for a question.
      - pred – predicted answer
  - abstract retrieval_metric_function(knowledge: List[str], pred: str) → float
 
- class RAGchain.benchmark.answer.metrics.EM_answer
  - Bases: BaseAnswerMetric
  - retrieval_metric_function(solutions: List[str], pred: str) → float
 
- class RAGchain.benchmark.answer.metrics.KF1
  - Bases: BasePassageAnswerMetric
  - retrieval_metric_function(knowledge: List[str], pred: str) → float
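Knowledge-F1 (KF1) is commonly computed as token-level F1 between the prediction and the ground-truth knowledge passages. Here is a sketch under that assumption, taking the best F1 over all passages; RAGchain's actual tokenization and aggregation may differ, and the `knowledge_f1` helper is illustrative.

```python
from collections import Counter
from typing import List

def token_f1(gold: str, pred: str) -> float:
    # Token-level F1: harmonic mean of precision and recall over shared tokens.
    gold_tokens, pred_tokens = gold.lower().split(), pred.lower().split()
    common = Counter(gold_tokens) & Counter(pred_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def knowledge_f1(knowledge: List[str], pred: str) -> float:
    # Score against every knowledge passage and keep the best match.
    return max(token_f1(k, pred) for k in knowledge)
```

For example, `knowledge_f1(["the cat sat on the mat"], "the cat sat")` yields precision 1.0 and recall 0.5, giving an F1 of 2/3.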
 
- class RAGchain.benchmark.answer.metrics.METEOR
  - Bases: BaseAnswerMetric
  - retrieval_metric_function(solutions: List[str], pred: str) → float
 
- class RAGchain.benchmark.answer.metrics.ROUGE
  - Bases: BaseAnswerMetric
  - retrieval_metric_function(solutions: List[str], pred: str) → float