Configuration modules
- class chatbot_eval.config.models.ModelConfig(name, provider, model, temperature=0.0, credentials=<factory>, request_kwargs=<factory>)[source]
Bases: object

Configuration for a single provider-backed model.
- Parameters:
name (str)
provider (str)
model (str)
temperature (float)
credentials (dict[str, Any])
request_kwargs (dict[str, Any])
- name: str
- provider: str
- model: str
- temperature: float
- credentials: dict[str, Any]
- request_kwargs: dict[str, Any]
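The fields above can be mirrored as a plain dataclass; a minimal sketch assuming the documented defaults (the real class may add validation):

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ModelConfig:
    """Illustrative mirror of the documented fields."""
    name: str
    provider: str
    model: str
    temperature: float = 0.0
    credentials: dict[str, Any] = field(default_factory=dict)
    request_kwargs: dict[str, Any] = field(default_factory=dict)


# Hypothetical entry for an OpenAI-backed model; names are examples only.
cfg = ModelConfig(name="judge-model", provider="openai", model="gpt-4o")
```

The `<factory>` defaults shown in the signature correspond to `field(default_factory=dict)`, so each instance gets its own empty `credentials` and `request_kwargs` dicts.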
- class chatbot_eval.config.judges.JudgeConfig(name, model_config, prompt_path, debug=False)[source]
Bases: object

One judge metric configuration entry.
- Parameters:
name (str)
model_config (str)
prompt_path (str)
debug (bool)
- name: str
- model_config: str
- prompt_path: str
- debug: bool
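Note that `model_config` and `prompt_path` are strings, not nested objects, so a judge entry references its model configuration and prompt by path. A hypothetical JSON entry matching these fields (all values are examples, not shipped defaults):

```json
{
  "name": "faithfulness",
  "model_config": "configs/models/judge_model.json",
  "prompt_path": "prompts/faithfulness_judge.txt",
  "debug": false
}
```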
- chatbot_eval.config.builders.load_model_config(path)[source]
Load a model configuration from JSON.
- Parameters:
path (str | Path)
- Return type:
ModelConfig
- chatbot_eval.config.builders.load_judge_config(path)[source]
Load a judge configuration from JSON.
- Parameters:
path (str | Path)
- Return type:
JudgeConfig
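Both loaders follow the same pattern: read a JSON file and hydrate the corresponding config object. A minimal sketch of that pattern, returning a plain dict for illustration (the real loaders return `ModelConfig` / `JudgeConfig` instances):

```python
import json
import tempfile
from pathlib import Path


def load_json_config(path):
    """Sketch: parse a JSON config file; accepts str or Path."""
    return json.loads(Path(path).read_text(encoding="utf-8"))


# Round-trip a hypothetical model config through a temp file.
with tempfile.NamedTemporaryFile(
    "w", suffix=".json", delete=False
) as f:
    json.dump(
        {"name": "judge-model", "provider": "openai",
         "model": "gpt-4o", "temperature": 0.0},
        f,
    )
    tmp_path = f.name

data = load_json_config(tmp_path)
```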
- chatbot_eval.config.builders.build_chat_client(model_config)[source]
Construct the appropriate chat client for model_config.
- Parameters:
model_config (ModelConfig)
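"Appropriate" here implies dispatch on `model_config.provider`. A sketch of such a factory, with stand-in client classes (the real client types and supported providers are not documented here):

```python
from types import SimpleNamespace


class OpenAIChatClient:
    """Stand-in client class; illustrative only."""
    def __init__(self, model, temperature, **request_kwargs):
        self.model, self.temperature = model, temperature


class DeepSeekChatClient:
    """Stand-in client class; illustrative only."""
    def __init__(self, model, temperature, **request_kwargs):
        self.model, self.temperature = model, temperature


def build_chat_client(model_config):
    """Sketch: pick a client class by provider name."""
    registry = {"openai": OpenAIChatClient, "deepseek": DeepSeekChatClient}
    try:
        client_cls = registry[model_config.provider]
    except KeyError:
        raise ValueError(f"unknown provider: {model_config.provider!r}")
    return client_cls(model_config.model, model_config.temperature,
                      **model_config.request_kwargs)


# Any object with the ModelConfig fields works for the sketch.
cfg = SimpleNamespace(provider="deepseek", model="deepseek-chat",
                      temperature=0.0, request_kwargs={})
client = build_chat_client(cfg)
```

A registry dict keeps the factory open to new providers without a chain of `if`/`elif` branches.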
- chatbot_eval.config.builders.build_judge_metric(project_root, judge_config_path)[source]
Build an LLM judge metric with DeepSeek fallback when OpenAI is unavailable.
- Parameters:
project_root (str | Path)
judge_config_path (str | Path)
- Return type:
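The documented fallback (DeepSeek when OpenAI is unavailable) can be sketched as a small availability check; the environment-variable name is an assumption, as the docstring does not state how availability is detected:

```python
import os


def choose_judge_provider(env=None):
    """Sketch: prefer OpenAI, fall back to DeepSeek when no OpenAI
    credentials are present. The OPENAI_API_KEY key is an assumption."""
    env = os.environ if env is None else env
    if env.get("OPENAI_API_KEY"):
        return "openai"
    return "deepseek"
```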
- chatbot_eval.config.runtime.build_bot_from_config(project_root, bot_config_path, faq_csv_path, domain_knowledge_path)[source]
Build a bot instance from config and runtime data files.
- Parameters:
project_root (str | Path)
bot_config_path (str | Path)
faq_csv_path (str | Path)
domain_knowledge_path (str | Path)
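Since every argument accepts `str | Path`, the runtime paths can be derived from the project root. A sketch of the wiring (all directory and file names are hypothetical):

```python
from pathlib import Path

project_root = Path("/srv/chatbot_eval")  # hypothetical project root
bot_config_path = project_root / "configs" / "bot.json"
faq_csv_path = project_root / "data" / "faq.csv"
domain_knowledge_path = project_root / "data" / "domain_knowledge.txt"

# bot = build_bot_from_config(project_root, bot_config_path,
#                             faq_csv_path, domain_knowledge_path)
```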