From 11c8014a2db0d99b0672eb39cfd873be7a6f4232 Mon Sep 17 00:00:00 2001
From: KalyanKS-NLP
Date: Sun, 13 Apr 2025 11:20:52 +0530
Subject: [PATCH] Add PydanticAI Evals under LLM Evaluation

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 67ab4cc..962686e 100644
--- a/README.md
+++ b/README.md
@@ -227,6 +227,8 @@ This repository contains a curated list of 120+ LLM libraries category wise.
 | AgentEvals | Evaluators and utilities for evaluating the performance of your agents. | [Link](https://github.com/langchain-ai/agentevals) |
 | LLMBox | A comprehensive library for implementing LLMs, including a unified training pipeline and comprehensive model evaluation. | [Link](https://github.com/RUCAIBox/LLMBox) |
 | Opik | An open-source end-to-end LLM Development Platform which also includes LLM evaluation. | [Link](https://github.com/comet-ml/opik) |
+| PydanticAI Evals | A powerful evaluation framework designed to help you systematically evaluate the performance of LLM applications. | [Link](https://ai.pydantic.dev/evals/) |
+