Open Agent Leaderboard offers a complete evaluation pipeline for open-source AI agents. It includes a curated task suite covering reasoning, planning, question answering, and tool usage; an automated harness that runs agents in isolated environments; and scripts that collect performance metrics such as success rate, runtime, and resource consumption. Results are aggregated and displayed on a web-based leaderboard with filters, charts, and historical comparisons. The framework supports Docker for reproducible setups, ships integration templates for popular agent architectures, and exposes extensible configuration for adding new tasks or metrics.
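
To illustrate the kind of extensibility described above, here is a minimal sketch of how a custom task and metric could be registered with a framework like this. All names in it (`register_task`, `register_metric`, `UnitConversionTask`, `success_rate`) are hypothetical assumptions for illustration and do not reflect the project's actual API.

```python
# Hypothetical sketch: registry names and the task/metric API below are
# illustrative assumptions, not the framework's real interface.
from dataclasses import dataclass

# Simple in-memory registries standing in for the framework's extension points.
TASK_REGISTRY: dict[str, type] = {}
METRIC_REGISTRY: dict[str, callable] = {}


def register_task(name: str):
    """Decorator that records a task class under a name."""
    def wrapper(cls):
        TASK_REGISTRY[name] = cls
        return cls
    return wrapper


def register_metric(name: str):
    """Decorator that records a metric function under a name."""
    def wrapper(fn):
        METRIC_REGISTRY[name] = fn
        return fn
    return wrapper


@register_task("unit_conversion_qa")
@dataclass
class UnitConversionTask:
    # Toy question-answering task: the agent must convert 2.5 km to meters.
    prompt: str = "Convert 2.5 kilometers to meters."
    expected: str = "2500"

    def score(self, agent_output: str) -> bool:
        # Count the run as a success if the expected answer appears in the output.
        return self.expected in agent_output


@register_metric("success_rate")
def success_rate(results: list[bool]) -> float:
    # Fraction of task runs that succeeded.
    return sum(results) / len(results) if results else 0.0


if __name__ == "__main__":
    task = TASK_REGISTRY["unit_conversion_qa"]()
    runs = [task.score("The answer is 2500 meters."),
            task.score("About 25000 m.")]
    print(METRIC_REGISTRY["success_rate"](runs))  # 0.5
```

A decorator-based registry like this is one common pattern for letting users plug in new tasks or metrics without modifying the harness itself; the actual configuration mechanism may differ.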