MGym is a specialized framework for building and managing multi-agent reinforcement learning (MARL) environments in Python. It lets users define complex scenarios with multiple agents, each with its own observation and action spaces, reward function, and interaction rules. MGym supports both synchronous and asynchronous execution modes, covering parallel as well as turn-based agent simulations. Its familiar Gym-like API integrates with popular RL libraries such as Stable Baselines, RLlib, and PyTorch. Utility modules for environment benchmarking, result visualization, and performance analytics support systematic evaluation of MARL algorithms. The modular architecture enables rapid prototyping of cooperative, competitive, or mixed-agent tasks, helping researchers and developers accelerate MARL experimentation.
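
To make the per-agent, Gym-like workflow concrete, below is a minimal sketch of the kind of environment the description implies: two agents with their own observation and action spaces, dict-keyed reset/step calls, and a shared (cooperative) reward. The class name, the dict conventions, and the reward scheme are illustrative assumptions for this sketch, not MGym's actual API; it uses only standard `gymnasium` spaces and NumPy so it runs on its own.

```python
# Illustrative sketch (not MGym's actual API): a two-agent matching game with a
# Gym-like, per-agent interface. Each agent picks 0 or 1; both are rewarded
# when their choices match. The dict-keyed step() mirrors a parallel
# (synchronous) execution mode.
import numpy as np
from gymnasium import spaces


class TwoAgentMatchingEnv:
    """Two agents pick 0 or 1 each step; both are rewarded when they match."""

    def __init__(self, max_steps: int = 10):
        self.agents = ["agent_0", "agent_1"]
        self.max_steps = max_steps
        # Per-agent observation and action spaces, as described above.
        self.observation_spaces = {
            a: spaces.Box(0.0, 1.0, shape=(2,), dtype=np.float32) for a in self.agents
        }
        self.action_spaces = {a: spaces.Discrete(2) for a in self.agents}
        self._t = 0

    def reset(self, seed=None):
        self._t = 0
        obs = {a: np.zeros(2, dtype=np.float32) for a in self.agents}
        infos = {a: {} for a in self.agents}
        return obs, infos

    def step(self, actions):
        self._t += 1
        matched = actions["agent_0"] == actions["agent_1"]
        reward = 1.0 if matched else -1.0
        # Each agent observes both agents' last actions.
        obs = {
            a: np.array([actions["agent_0"], actions["agent_1"]], dtype=np.float32)
            for a in self.agents
        }
        rewards = {a: reward for a in self.agents}  # shared, cooperative reward
        terminations = {a: False for a in self.agents}
        truncations = {a: self._t >= self.max_steps for a in self.agents}
        infos = {a: {} for a in self.agents}
        return obs, rewards, terminations, truncations, infos


# Usage: a random-policy rollout in the parallel (synchronous) mode.
env = TwoAgentMatchingEnv()
obs, _ = env.reset()
done = False
while not done:
    actions = {a: env.action_spaces[a].sample() for a in env.agents}
    obs, rewards, terms, truncs, _ = env.step(actions)
    done = all(terms.values()) or all(truncs.values())
```

A turn-based (asynchronous) variant would instead accept one agent's action per call and cycle through `self.agents`; the per-agent dicts stay the same, which is what keeps the two execution modes interchangeable from the training loop's point of view.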