- Step 1: Clone the repository from GitHub
- Step 2: Set up the environment by following the README instructions
- Step 3: Run a local or cloud-hosted LLM inference model (the backend the client will call)
- Step 4: Launch the MCP client and connect it to the MCP server
- Step 5: Use the chat UI to interact with the LLM and evaluate MCP behavior
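Under the hood, the client–server connection in Step 4 is an MCP handshake carried over JSON-RPC 2.0. As a minimal sketch (not this repository's actual code; the client name, version, and protocol version string are illustrative), the first message a client sends is an `initialize` request:

```python
import json

def make_initialize_request(request_id: int = 1) -> str:
    """Build an MCP `initialize` JSON-RPC 2.0 request as a JSON string.

    Field values (protocolVersion, clientInfo) are placeholders for
    illustration; a real client fills these in per the MCP spec.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",  # example version string
            "capabilities": {},               # client capabilities, empty here
            "clientInfo": {"name": "example-client", "version": "0.1.0"},
        },
    })

# The server replies with its own capabilities, after which the client can
# list tools/resources and the chat UI in Step 5 can route LLM tool calls.
```

Once the server responds, the client typically sends an `initialized` notification and then issues requests such as `tools/list`, which is what the chat UI relies on when evaluating MCP behavior.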