JasonEnvironments delivers a collection of environment modules designed specifically for the Jason multi-agent programming platform. Each module exposes a standardized interface so agents can perceive, act, and interact within diverse scenarios such as pursuit-evasion, resource foraging, and cooperative tasks. The library is straightforward to integrate into existing Jason projects: include the JAR on the project classpath, declare the desired environment class in the MAS project (.mas2j) file, and launch the simulation. Developers can also extend the environments or customize their parameters and rules to tailor them to their research or educational needs.
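Because the modules follow Jason's standard environment interface, a custom or extended environment is typically a subclass of jason.environment.Environment. The sketch below is illustrative only: the class name, percept literals, and action name are assumptions, not part of JasonEnvironments' actual API.

```java
// Minimal sketch of the standard Jason environment pattern such a module follows;
// class, percept, and action names here are hypothetical.
import jason.asSyntax.Literal;
import jason.asSyntax.Structure;
import jason.environment.Environment;

public class ForagingEnvironment extends Environment {

    @Override
    public void init(String[] args) {
        super.init(args);
        // Publish an initial percept visible to all agents.
        addPercept(Literal.parseLiteral("resource(5,3)"));
    }

    @Override
    public boolean executeAction(String agName, Structure action) {
        // Map agent actions onto environment updates.
        if (action.getFunctor().equals("pick")) {
            clearPercepts(agName);
            addPercept(agName, Literal.parseLiteral("carrying(resource)"));
            return true;  // action succeeded
        }
        return false;     // unknown action
    }
}
```

In the MAS project file, such an environment would then be selected with a line like `environment: ForagingEnvironment` (again a hypothetical class name), with the JasonEnvironments JAR on the classpath.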
Scalable MADDPG is a research-oriented framework for multi-agent reinforcement learning, offering an implementation of the MADDPG (Multi-Agent Deep Deterministic Policy Gradient) algorithm designed to scale to larger numbers of agents. It follows the centralized-training, decentralized-execution paradigm: critics are centralized during training, while actors act independently at execution time, which improves stability and efficiency. The library includes Python scripts to define custom environments, configure network architectures, and adjust hyperparameters. Users can train multiple agents in parallel, monitor metrics, and visualize learning curves. It integrates with OpenAI Gym-style environments and supports GPU acceleration via TensorFlow. By providing modular components, Scalable MADDPG enables flexible experimentation on cooperative, competitive, or mixed multi-agent tasks, facilitating rapid prototyping and benchmarking.
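As a rough illustration of the centralized-critic, decentralized-actor structure described above (not the library's actual API; the function names, layer sizes, and dimensions are assumptions), a minimal TensorFlow/Keras sketch might look like this:

```python
# Sketch of MADDPG's network layout: one decentralized actor per agent and a
# centralized critic that sees all observations and actions during training.
# Names and sizes are illustrative, not Scalable MADDPG's API.
import tensorflow as tf

def build_actor(obs_dim: int, act_dim: int) -> tf.keras.Model:
    """Per-agent actor: maps the agent's own observation to its action."""
    obs = tf.keras.Input(shape=(obs_dim,))
    h = tf.keras.layers.Dense(64, activation="relu")(obs)
    h = tf.keras.layers.Dense(64, activation="relu")(h)
    action = tf.keras.layers.Dense(act_dim, activation="tanh")(h)
    return tf.keras.Model(obs, action)

def build_critic(n_agents: int, obs_dim: int, act_dim: int) -> tf.keras.Model:
    """Centralized critic: scores the joint observation-action during training."""
    joint_obs = tf.keras.Input(shape=(n_agents * obs_dim,))
    joint_act = tf.keras.Input(shape=(n_agents * act_dim,))
    x = tf.keras.layers.Concatenate()([joint_obs, joint_act])
    x = tf.keras.layers.Dense(128, activation="relu")(x)
    x = tf.keras.layers.Dense(128, activation="relu")(x)
    q_value = tf.keras.layers.Dense(1)(x)
    return tf.keras.Model([joint_obs, joint_act], q_value)

# Hypothetical sizes: one actor and one centralized critic per agent.
n_agents, obs_dim, act_dim = 3, 10, 2
actors = [build_actor(obs_dim, act_dim) for _ in range(n_agents)]
critics = [build_critic(n_agents, obs_dim, act_dim) for _ in range(n_agents)]
```

At execution time only the per-agent actors are needed, each conditioning solely on its own local observation, which is what makes decentralized deployment possible after centralized training.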