- Step 1: Clone the just-prompt repository and install its dependencies
- Step 2: Set your provider API keys (e.g., OpenAI, Anthropic) as environment variables
- Step 3: Choose the MCP tool you need (e.g., prompt, prompt_from_file, ceo_and_board)
- Step 4: Configure the tool's parameters, such as the prompt text, file paths, or default models
- Step 5: Run the tool to send your prompt to multiple LLMs simultaneously
- Step 6: Collect and compare the responses
- Step 7: Use additional tools, such as saving outputs to files or listing available models, as needed
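The collection step above can be sketched in Python. This is a minimal illustration, assuming each requested model yields one text response (the helper names `collect_responses` and `summarize` are hypothetical, not part of just-prompt's API):

```python
def collect_responses(models, responses):
    """Pair each model identifier with its response text.

    Hypothetical helper: assumes just-prompt returned exactly one
    response string per requested model, in the same order.
    """
    if len(models) != len(responses):
        raise ValueError("expected one response per model")
    return dict(zip(models, responses))


def summarize(results):
    """Report word count per model, a quick first pass at comparing outputs."""
    return {model: len(text.split()) for model, text in results.items()}


# Usage sketch (model identifiers are illustrative placeholders):
results = collect_responses(
    ["provider-a:model-x", "provider-b:model-y"],
    ["First answer from model x.", "A longer answer from model y here."],
)
counts = summarize(results)
```

From here, the per-model dictionary can be written to files or fed into whatever comparison you prefer.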