The LLM Agents Simulation Framework enables the design, execution, and analysis of simulated environments in which autonomous agents interact through large language models. Users can register multiple agent instances, assign each one customizable prompts and roles, and specify a communication channel such as message passing or shared state. The framework orchestrates simulation cycles, collects logs, and computes metrics such as turn-taking frequency, response latency, and success rate. It integrates with OpenAI, Hugging Face, and local LLM backends. Researchers can build complex scenarios, such as negotiation, resource allocation, or collaborative problem-solving, to observe emergent behaviors. An extensible plugin architecture allows new agent behaviors, environment constraints, or visualization modules to be added, supporting reproducible experiments.
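
To make the moving parts concrete, here is a minimal, self-contained sketch of the concepts above: agent registration, a message-passing channel, round-robin simulation cycles, log collection, and metric computation. All names in the sketch (`Agent`, `MessageBus`, `Simulation`, `echo_backend`) are illustrative assumptions rather than the framework's actual API, and the stub backend stands in for an OpenAI, Hugging Face, or local-model client so the example runs without any external service.

```python
import random
import time
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch: class and method names below are illustrative only,
# not the framework's real API.


@dataclass
class Agent:
    name: str
    role: str
    system_prompt: str
    # Pluggable LLM backend: any callable mapping a prompt to a reply
    # (an OpenAI client, a Hugging Face pipeline, or a local model).
    backend: Callable[[str], str]

    def respond(self, message: str) -> str:
        prompt = f"{self.system_prompt}\n[{self.role}] received: {message}"
        return self.backend(prompt)


@dataclass
class MessageBus:
    """Simple message-passing channel shared by all registered agents."""
    log: List[dict] = field(default_factory=list)

    def send(self, sender: str, recipient: str, content: str, latency: float) -> None:
        self.log.append({"from": sender, "to": recipient,
                         "content": content, "latency": latency})


class Simulation:
    def __init__(self) -> None:
        self.agents: Dict[str, Agent] = {}
        self.bus = MessageBus()

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def run(self, opening_message: str, cycles: int = 3) -> None:
        """Orchestrate round-robin simulation cycles and collect logs."""
        message = opening_message
        names = list(self.agents)
        for _ in range(cycles):
            for i, name in enumerate(names):
                start = time.perf_counter()
                reply = self.agents[name].respond(message)
                latency = time.perf_counter() - start
                recipient = names[(i + 1) % len(names)]
                self.bus.send(name, recipient, reply, latency)
                message = reply

    def metrics(self) -> dict:
        """Compute turn-taking counts and mean response latency from the log."""
        turns: Dict[str, int] = {}
        for entry in self.bus.log:
            turns[entry["from"]] = turns.get(entry["from"], 0) + 1
        latencies = [e["latency"] for e in self.bus.log]
        return {
            "turns_per_agent": turns,
            "mean_latency_s": sum(latencies) / len(latencies) if latencies else 0.0,
        }


# Stub backend so the sketch runs offline; replace with a real LLM call.
def echo_backend(prompt: str) -> str:
    return f"offer {random.randint(1, 10)} units"


if __name__ == "__main__":
    sim = Simulation()
    sim.register(Agent("buyer", "negotiator", "You buy at the lowest price.", echo_backend))
    sim.register(Agent("seller", "negotiator", "You sell at the highest price.", echo_backend))
    sim.run("Let's negotiate a price.", cycles=2)
    print(sim.metrics())
```

Treating the backend as a plain prompt-to-reply callable is one way to approximate the pluggable LLM integration and the plugin architecture described above: new agent behaviors or environment constraints can be expressed as wrappers around that callable or around the simulation loop, without changing the core orchestration.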