Add PydanticAI Evals under LLM Evaluation

This commit is contained in:
KalyanKS-NLP
2025-04-13 11:20:52 +05:30
parent 67d9f00d00
commit 11c8014a2d


@@ -227,6 +227,8 @@ This repository contains a curated list of 120+ LLM libraries category wise.
| AgentEvals | Evaluators and utilities for evaluating the performance of your agents. | [Link](https://github.com/langchain-ai/agentevals) |
| LLMBox | A comprehensive library for implementing LLMs, including a unified training pipeline and comprehensive model evaluation. | [Link](https://github.com/RUCAIBox/LLMBox) |
| Opik | An open-source end-to-end LLM Development Platform which also includes LLM evaluation. | [Link](https://github.com/comet-ml/opik) |
| PydanticAI Evals | An evaluation framework for systematically measuring the performance of LLM applications. | [Link](https://ai.pydantic.dev/evals/) |