Comprehensive Game AI Tools for Every Need

Get access to game AI solutions that address multiple requirements. One-stop resources for streamlined workflows.

Game AI

  • BomberManAI is a Python-based AI agent that autonomously navigates and battles in Bomberman game environments using search algorithms.
    What is BomberManAI?
    BomberManAI is an AI agent designed to play the classic Bomberman game autonomously. Developed in Python, it interfaces with a game environment to perceive map states, available moves, and opponent positions in real time. The core algorithm combines A* pathfinding, breadth-first search for reachability analysis, and a heuristic evaluation function to determine optimal bomb placement and evasion strategies. The agent handles dynamic obstacles, power-ups, and multiple opponents on various map layouts. Its modular architecture enables developers to experiment with custom heuristics, reinforcement learning modules, or alternative decision-making strategies. Ideal for game AI researchers, students, and competitive bot developers, BomberManAI provides a flexible framework for testing and improving autonomous gaming agents. A minimal sketch of the BFS-reachability and bomb-placement idea appears after this list.
  • An RL framework offering PPO and DQN training and evaluation tools for developing competitive Pommerman game agents.
    What is PommerLearn?
    PommerLearn enables researchers and developers to train multi-agent RL bots in the Pommerman game environment. It includes ready-to-use implementations of popular algorithms (PPO, DQN), flexible configuration files for hyperparameters, automatic logging and visualization of training metrics, model checkpointing, and evaluation scripts. Its modular architecture makes it easy to extend with new algorithms, customize environments, and integrate with standard ML libraries such as PyTorch. A config-driven training-loop sketch illustrating this kind of workflow appears after this list.
  • VMAS is a modular MARL framework that enables GPU-accelerated multi-agent environment simulation and training with built-in algorithms.
    What is VMAS?
    VMAS is a comprehensive toolkit for building and training multi-agent systems with deep reinforcement learning. It supports GPU-based parallel simulation of hundreds of environment instances, enabling high-throughput data collection and scalable training. VMAS includes implementations of popular MARL algorithms such as PPO, MADDPG, QMIX, and COMA, along with modular policy and environment interfaces for rapid prototyping. The framework supports centralized training with decentralized execution (CTDE) and offers customizable reward shaping, observation spaces, and callback hooks for logging and visualization. Thanks to its modular design, VMAS integrates with PyTorch models and external environments, making it well suited to research on cooperative, competitive, and mixed-motive tasks across robotics, traffic control, resource allocation, and game AI scenarios. A short batched-simulation sketch illustrating the vectorized-stepping idea appears after this list.
  • Open source TensorFlow-based Deep Q-Network agent that learns to play Atari Breakout using experience replay and target networks.
    What is DQN-Deep-Q-Network-Atari-Breakout-TensorFlow?
    DQN-Deep-Q-Network-Atari-Breakout-TensorFlow provides a complete implementation of the DQN algorithm tailored for the Atari Breakout environment. It uses a convolutional neural network to approximate Q-values, applies experience replay to break correlations between sequential observations, and employs a periodically updated target network to stabilize training. The agent follows an epsilon-greedy policy for exploration and can be trained from scratch on raw pixel input. The repository includes configuration files, training scripts to monitor reward growth over episodes, evaluation scripts to test trained models, and TensorBoard utilities for visualizing training metrics. Users can adjust hyperparameters such as learning rate, replay buffer size, and batch size to experiment with different setups. A compact sketch of the replay-buffer and target-network mechanics appears after this list.
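The sketch below illustrates the kind of grid reasoning described for BomberManAI: a breadth-first search over walkable tiles followed by a toy bomb-placement heuristic. The map encoding, function names, and scoring rule are illustrative assumptions, not BomberManAI's actual code or API.

```python
# Hypothetical illustration of BFS reachability plus a bomb-placement heuristic
# on a grid; the map encoding and names are assumptions, not BomberManAI's API.
from collections import deque

WALL, FLOOR, CRATE = "#", ".", "+"

def reachable_tiles(grid, start):
    """Breadth-first search over walkable tiles, returning the set of reachable cells."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == FLOOR and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

def bomb_score(grid, pos, blast_range=2):
    """Toy heuristic: count crates a bomb placed at `pos` would destroy along each axis."""
    rows, cols = len(grid), len(grid[0])
    score = 0
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        for step in range(1, blast_range + 1):
            r, c = pos[0] + dr * step, pos[1] + dc * step
            if not (0 <= r < rows and 0 <= c < cols) or grid[r][c] == WALL:
                break
            if grid[r][c] == CRATE:
                score += 1
                break  # the blast stops at the first crate it hits
    return score

if __name__ == "__main__":
    level = ["#######",
             "#..+..#",
             "#.###.#",
             "#..+..#",
             "#######"]
    grid = [list(row) for row in level]
    start = (1, 1)
    candidates = reachable_tiles(grid, start)
    best = max(candidates, key=lambda p: bomb_score(grid, p))
    print("best bomb tile:", best, "destroys", bomb_score(grid, best), "crate(s)")
```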
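The next sketch shows the general shape of a config-driven training loop with logging and periodic checkpointing, as described for PommerLearn. The config keys, the RandomAgent stand-in, and the train function are hypothetical stand-ins, not PommerLearn's actual interface; a real run would roll out the Pommerman environment instead of the fake rollout used here.

```python
# A minimal, hypothetical config-driven training/evaluation loop; the config keys
# and Trainer-style structure are assumptions, not PommerLearn's actual API.
import json
import random

DEFAULT_CONFIG = {
    "algorithm": "PPO",          # or "DQN"
    "learning_rate": 3e-4,
    "num_episodes": 10,
    "checkpoint_every": 5,
    "seed": 0,
}

class RandomAgent:
    """Stand-in policy so the loop runs without an RL library or game environment."""
    def __init__(self, num_actions=6):
        self.num_actions = num_actions

    def act(self, observation):
        return random.randrange(self.num_actions)

def train(config, agent):
    random.seed(config["seed"])
    history = []
    for episode in range(1, config["num_episodes"] + 1):
        # A real framework would roll out the Pommerman environment here; this
        # fake rollout only exercises the logging/checkpointing structure.
        episode_return = 0.0
        for _ in range(20):
            action = agent.act(observation=None)
            episode_return += random.random() if action != 0 else 0.0
        history.append({"episode": episode, "return": round(episode_return, 3)})
        if episode % config["checkpoint_every"] == 0:
            print(f"[checkpoint] episode {episode}: return={episode_return:.2f}")
    return history

if __name__ == "__main__":
    config = dict(DEFAULT_CONFIG)      # in practice this would be loaded from a config file
    history = train(config, RandomAgent())
    best = max(history, key=lambda m: m["return"])
    print(json.dumps({"algorithm": config["algorithm"], "best_episode": best}, indent=2))
```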
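The following sketch illustrates the batched-simulation idea behind GPU-accelerated frameworks such as VMAS: every environment copy is a row of a tensor, so a single tensor operation steps hundreds of instances at once. The toy point-mass dynamics and class names are assumptions for illustration, not VMAS's actual API.

```python
# Hypothetical vectorized multi-agent environment: all copies are stepped in one
# batched PyTorch operation. This is an illustration of the idea, not VMAS's API.
import torch

class BatchedPointEnv:
    """Toy 2D task: each agent steers toward its goal; reward is negative distance to the goal."""

    def __init__(self, num_envs=512, num_agents=3,
                 device="cuda" if torch.cuda.is_available() else "cpu"):
        self.num_envs, self.num_agents, self.device = num_envs, num_agents, device
        self.pos = torch.zeros(num_envs, num_agents, 2, device=device)
        self.goal = torch.rand(num_envs, num_agents, 2, device=device)

    def observe(self):
        # Per-agent observation: own position and vector to the goal, batched over all copies.
        return torch.cat([self.pos, self.goal - self.pos], dim=-1)

    def step(self, actions):
        # actions: (num_envs, num_agents, 2) velocity commands for every environment copy at once
        self.pos = self.pos + 0.05 * actions.clamp(-1.0, 1.0)
        reward = -torch.linalg.norm(self.pos - self.goal, dim=-1)   # (num_envs, num_agents)
        return self.observe(), reward

if __name__ == "__main__":
    env = BatchedPointEnv(num_envs=512, num_agents=3)
    obs = env.observe()
    for _ in range(100):
        # Trivial decentralized "policy": each agent moves along its own goal vector,
        # using only its local observation (the decentralized-execution half of CTDE).
        actions = obs[..., 2:]
        obs, reward = env.step(actions)
    print("mean reward after 100 steps:", reward.mean().item())
```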
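Finally, a compact sketch of the three mechanisms highlighted for the DQN Breakout repository: a convolutional Q-network, an experience-replay buffer, and a periodically synchronized target network, written with tf.keras. Network sizes, hyperparameters, and the random-transition smoke test are illustrative assumptions rather than the repository's exact configuration.

```python
# Minimal DQN components (CNN Q-network, experience replay, target network) in tf.keras;
# shapes and hyperparameters are illustrative assumptions, not the repository's settings.
import random
from collections import deque

import numpy as np
import tensorflow as tf

NUM_ACTIONS = 4                       # Breakout's minimal action set: NOOP, FIRE, LEFT, RIGHT
FRAME_SHAPE = (84, 84, 4)             # four stacked grayscale frames, a common preprocessing choice
GAMMA, EPSILON, BATCH_SIZE = 0.99, 0.1, 32

def build_q_network():
    """Convolutional network mapping stacked frames to one Q-value per action."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=FRAME_SHAPE),
        tf.keras.layers.Conv2D(32, 8, strides=4, activation="relu"),
        tf.keras.layers.Conv2D(64, 4, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(64, 3, strides=1, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(NUM_ACTIONS),
    ])

q_net = build_q_network()
target_net = build_q_network()
target_net.set_weights(q_net.get_weights())       # target network starts as a frozen copy
optimizer = tf.keras.optimizers.Adam(1e-4)
replay_buffer = deque(maxlen=100_000)             # experience replay breaks correlations between samples

def select_action(state):
    """Epsilon-greedy exploration over the online network's Q-values."""
    if random.random() < EPSILON:
        return random.randrange(NUM_ACTIONS)
    q_values = q_net(state[None, ...], training=False)
    return int(tf.argmax(q_values[0]))

def train_step():
    """One gradient step on a random minibatch, with bootstrap targets from the target network."""
    batch = random.sample(replay_buffer, BATCH_SIZE)
    states, actions, rewards, next_states, dones = zip(*batch)
    states, next_states = np.stack(states), np.stack(next_states)
    actions = np.array(actions, dtype=np.int32)
    rewards = np.array(rewards, dtype=np.float32)
    dones = np.array(dones, dtype=np.float32)
    next_q = tf.reduce_max(target_net(next_states), axis=1)
    targets = rewards + GAMMA * next_q * (1.0 - dones)       # no bootstrap past terminal states
    with tf.GradientTape() as tape:
        q_values = q_net(states)
        chosen = tf.reduce_sum(q_values * tf.one_hot(actions, NUM_ACTIONS), axis=1)
        loss = tf.keras.losses.Huber()(targets, chosen)
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
    return float(loss)

# Smoke test with random transitions standing in for preprocessed Atari frames.
for _ in range(BATCH_SIZE):
    s = np.random.rand(*FRAME_SHAPE).astype(np.float32)
    s_next = np.random.rand(*FRAME_SHAPE).astype(np.float32)
    replay_buffer.append((s, select_action(s), 1.0, s_next, 0.0))
print("loss on one minibatch:", train_step())
target_net.set_weights(q_net.get_weights())       # periodic target-network synchronization
```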