RAGchain.reranker.llm package
Submodules
RAGchain.reranker.llm.llm module
The original code is from [RankGPT](https://github.com/sunnweiwei/RankGPT). I modified the code to fit the RAGchain framework.
- class RAGchain.reranker.llm.llm.LLMReranker(model_name: str = 'gpt-3.5-turbo', api_base: str | None = None, *args, **kwargs)
Bases:
BaseReranker
LLMReranker is a reranker based on [RankGPT](https://github.com/sunnweiwei/RankGPT). The LLM reranks the passages according to the question. This reranker supports OpenAI models only.
- invoke(input: Input, config: RunnableConfig | None = None) Output
Transform a single input into an output. Override to implement.
- Args:
input – The input to the runnable.
config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’ and ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details.
- Returns:
The output of the runnable.
- rerank(query: str, passages: List[Passage]) List[Passage]
Reranks a list of passages based on a specific ranking algorithm.
- rerank_sliding_window(query: str, passages: List[Passage], window_size: int) List[Passage]
Reranks a list of passages based on a specific ranking algorithm with a sliding window. This is useful when the model input token size is limited, as it is for LLMs.
- Parameters:
query (str) – The query that was used for retrieving the passages.
passages (List[Passage]) – The list of passages to be reranked.
window_size (int) – The size of the sliding window used for reranking.
- Returns:
List[Passage]: The reranked list of passages.
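The sliding-window idea behind `rerank_sliding_window` can be sketched in plain Python. The snippet below is an illustrative sketch only, not RAGchain's implementation: the `rerank_window` callable and `toy_window_ranker` are hypothetical stand-ins for the LLM call, and the back-to-front traversal with a `step` stride follows the RankGPT strategy described in the linked repository.

```python
from typing import Callable, List

def rerank_sliding_window_sketch(
    passages: List[str],
    window_size: int,
    step: int,
    rerank_window: Callable[[List[str]], List[str]],
) -> List[str]:
    """Rerank with a sliding window, moving from the tail of the list
    toward the head (as in RankGPT), so strong candidates bubble upward
    even when the model can only see window_size passages at a time."""
    passages = list(passages)
    end = len(passages)
    start = end - window_size
    while True:
        lo = max(start, 0)
        # Rerank only the current slice; everything else stays in place.
        passages[lo:end] = rerank_window(passages[lo:end])
        if lo == 0:
            break
        end -= step
        start -= step
    return passages

# Toy "reranker": sorts a window by a relevance score embedded in the
# passage text (a stand-in for the LLM's permutation output).
def toy_window_ranker(window):
    return sorted(window, key=lambda p: -int(p.split("#")[1]))

docs = [f"doc#{s}" for s in [3, 9, 1, 7, 5, 8, 2]]
print(rerank_sliding_window_sketch(docs, window_size=3, step=2,
                                   rerank_window=toy_window_ranker))
# → ['doc#9', 'doc#8', 'doc#3', 'doc#7', 'doc#1', 'doc#5', 'doc#2']
```

Note that a sliding window only guarantees that the best candidates reach the front; passages deep in the list may remain only partially ordered, which is the accepted trade-off for bounded input size.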
RAGchain.reranker.llm.rank_gpt module
This code is from the RankGPT repository, slightly modified for integration. Please visit https://github.com/sunnweiwei/RankGPT for more information.
- class RAGchain.reranker.llm.rank_gpt.SafeOpenai(keys=None, start_id=None, proxy=None, api_base: str | None = None)
Bases:
object
- chat(*args, return_text=False, reduce_length=False, **kwargs)
- text(*args, return_text=False, reduce_length=False, **kwargs)
- RAGchain.reranker.llm.rank_gpt.clean_response(response: str)
- RAGchain.reranker.llm.rank_gpt.create_permutation_instruction(item=None, rank_start=0, rank_end=100, model_name='gpt-3.5-turbo')
- RAGchain.reranker.llm.rank_gpt.get_post_prompt(query, num)
- RAGchain.reranker.llm.rank_gpt.get_prefix_prompt(query, num)
- RAGchain.reranker.llm.rank_gpt.max_tokens(model)
- RAGchain.reranker.llm.rank_gpt.num_tokens_from_messages(messages, model='gpt-3.5-turbo-0301')
Returns the number of tokens used by a list of messages.
- RAGchain.reranker.llm.rank_gpt.permutation_pipeline(item=None, rank_start=0, rank_end=100, model_name='gpt-3.5-turbo', api_key=None, api_base=None)
- RAGchain.reranker.llm.rank_gpt.receive_permutation(item, permutation, rank_start=0, rank_end=100)
- RAGchain.reranker.llm.rank_gpt.remove_duplicate(response)
- RAGchain.reranker.llm.rank_gpt.run_llm(messages, api_key=None, api_base: str | None = None, model_name='gpt-3.5-turbo')
- RAGchain.reranker.llm.rank_gpt.sliding_windows(item=None, rank_start=0, rank_end=100, window_size=20, step=10, model_name='gpt-3.5-turbo', api_key=None, api_base=None)
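The helpers above (`clean_response`, `remove_duplicate`, `receive_permutation`) implement RankGPT-style permutation parsing: the LLM answers with an ordering such as `[2] > [3] > [1]`, which is cleaned, deduplicated, and then applied to the candidate list. A minimal sketch of that idea (the function names here are hypothetical, not RAGchain's exact code):

```python
import re
from typing import List

def parse_permutation(response: str, num_items: int) -> List[int]:
    """Turn an LLM ranking response like '[2] > [3] > [1]' into a
    zero-based permutation over num_items candidates."""
    # Keep only the numeric identifiers, dropping brackets and
    # punctuation (the role of clean_response).
    ids = [int(tok) - 1 for tok in re.findall(r"\d+", response)]
    # Drop out-of-range ids and duplicates while preserving order
    # (the role of remove_duplicate).
    seen, order = set(), []
    for i in ids:
        if 0 <= i < num_items and i not in seen:
            seen.add(i)
            order.append(i)
    # Append any candidates the model omitted, in their original order,
    # so the permutation always covers the full list.
    order += [i for i in range(num_items) if i not in seen]
    return order

def apply_permutation(passages, response):
    """Reorder passages by the parsed permutation (cf. receive_permutation)."""
    order = parse_permutation(response, len(passages))
    return [passages[i] for i in order]

passages = ["apples", "bananas", "cherries"]
print(apply_permutation(passages, "[2] > [3] > [2] > [1]"))
# → ['bananas', 'cherries', 'apples']
```

The tolerance for duplicates and missing items matters in practice, because LLM ranking output is free-form text and occasionally repeats or drops an identifier.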