DEf-MARL (Decentralized Execution Framework for Multi-Agent Reinforcement Learning) provides infrastructure for training and executing cooperative agents without a centralized controller. Agents share policies and observations over peer-to-peer communication protocols, so coordination emerges from local interactions rather than from a global coordinator.

The framework integrates with deep learning libraries such as PyTorch and TensorFlow and offers customizable environment wrappers, distributed rollout collection, and gradient synchronization modules. Users can define agent-specific observation spaces, reward functions, and communication topologies.

DEf-MARL supports adding and removing agents at runtime, fault-tolerant execution through replication of critical state across nodes, and adaptive communication scheduling to balance exploration and exploitation. By parallelizing environment simulations and removing central bottlenecks, it accelerates training and scales to large MARL research workloads and industrial simulations.
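To make the decentralized pattern concrete, the sketch below shows one way the pieces described above could fit together in plain PyTorch: each agent holds its own policy over its own observations, performs local policy-gradient updates, and periodically averages parameters with its neighbours over a peer-to-peer topology. All names here (`LocalAgent`, `gossip_average`, the ring topology) are illustrative assumptions, not DEf-MARL's actual API, and the sketch assumes homogeneous policy architectures so that parameter averaging is well defined.

```python
# Illustrative sketch of decentralized execution with peer-to-peer parameter
# synchronization. Hypothetical names; not DEf-MARL's actual API.
import torch
import torch.nn as nn


class LocalAgent:
    """One agent with its own observation space and local policy network."""

    def __init__(self, obs_dim: int, n_actions: int):
        self.policy = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions)
        )
        self.optimizer = torch.optim.Adam(self.policy.parameters(), lr=3e-4)

    def act(self, obs: torch.Tensor) -> int:
        # Sample an action from the local policy; no central controller involved.
        logits = self.policy(obs)
        return torch.distributions.Categorical(logits=logits).sample().item()

    def local_update(self, obs: torch.Tensor, action: int, advantage: float):
        # Simple policy-gradient step using only locally available data.
        logits = self.policy(obs)
        log_prob = torch.distributions.Categorical(logits=logits).log_prob(
            torch.tensor(action)
        )
        loss = -log_prob * advantage
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()


def gossip_average(agents, topology):
    """Average each agent's parameters with its neighbours' (peer-to-peer sync)."""
    # Snapshot all parameters first so every agent averages against the same state.
    snapshots = [
        [p.detach().clone() for p in a.policy.parameters()] for a in agents
    ]
    for i, agent in enumerate(agents):
        neighbours = [i] + topology[i]
        with torch.no_grad():
            for k, param in enumerate(agent.policy.parameters()):
                param.copy_(
                    torch.stack([snapshots[j][k] for j in neighbours]).mean(dim=0)
                )


if __name__ == "__main__":
    # Three agents on a ring communication topology.
    agents = [LocalAgent(obs_dim=8, n_actions=4) for _ in range(3)]
    topology = {0: [1], 1: [0, 2], 2: [1]}
    for step in range(10):
        for agent in agents:
            obs = torch.randn(8)
            action = agent.act(obs)
            agent.local_update(obs, action, advantage=1.0)  # placeholder advantage
        gossip_average(agents, topology)
```

In a real deployment, the averaging step would run over the framework's communication layer rather than in-process, and the placeholder advantage would come from distributed rollout collection; the sketch only illustrates how decentralized execution and peer-to-peer synchronization interact.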