Newest Reinforcement Learning Solutions for 2024

Explore cutting-edge reinforcement learning tools launched in 2024. Perfect for staying ahead in your field.

Reinforcement Learning

  • Gym-Recsys provides customizable OpenAI Gym environments for scalable training and evaluation of reinforcement learning recommendation agents.
    What is Gym-Recsys?
    Gym-Recsys is a toolbox that wraps recommendation tasks into OpenAI Gym environments, allowing reinforcement learning algorithms to interact with simulated user-item matrices step by step. It provides synthetic user behavior generators, supports loading popular datasets, and delivers standard recommendation metrics like Precision@K and NDCG. Users can customize reward functions, user models, and item pools to experiment with different RL-based recommendation strategies in a reproducible manner.
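    A rough sketch of the Gym-style loop such a wrapper exposes (the `gym_recsys` import and the `RecSys-v0` environment ID are illustrative assumptions, not confirmed identifiers):

    ```python
    import gym
    import gym_recsys  # assumed import name that registers the environments

    env = gym.make("RecSys-v0")
    obs = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()          # recommend a random item
        obs, reward, done, info = env.step(action)  # reward reflects simulated user feedback
    env.close()
    ```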
  • FlowRL AI enables real-time metric-driven UI personalization using reinforcement learning.
    What is flowRL?
    FlowRL AI is a platform for real-time UI personalization using reinforcement learning. By tailoring the user interface to individual user needs and preferences, FlowRL aims to improve key business metrics. The platform dynamically adjusts UI elements based on live data, helping businesses deliver personalized experiences that increase engagement and conversion rates.
  • A collection of customizable grid-world environments compatible with OpenAI Gym for reinforcement learning algorithm development and testing.
    What is GridWorldEnvs?
    GridWorldEnvs offers a comprehensive suite of grid-world environments to support the design, testing, and benchmarking of reinforcement learning and multi-agent systems. Users can easily configure grid dimensions, agent start positions, goal locations, obstacles, reward structures, and action spaces. The library includes ready-to-use templates such as classic grid navigation, obstacle avoidance, and cooperative tasks, while also allowing custom scenario definitions via JSON or Python classes. Seamless integration with the OpenAI Gym API means that standard RL algorithms can be applied directly. Additionally, GridWorldEnvs supports single-agent and multi-agent experiments, logging, and visualization utilities for tracking agent performance.
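    A minimal sketch of configuring such an environment; the `GridWorldEnv` class and its keyword arguments are assumptions for illustration, not the library's documented signature:

    ```python
    from gridworld_envs import GridWorldEnv  # assumed module and class names

    env = GridWorldEnv(
        width=8, height=8,
        start=(0, 0), goal=(7, 7),
        obstacles=[(3, 3), (3, 4), (4, 3)],
        step_reward=-0.01, goal_reward=1.0,
    )
    obs = env.reset()
    obs, reward, done, info = env.step(env.action_space.sample())
    ```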
  • gym-fx provides a customizable OpenAI Gym environment to train and evaluate reinforcement learning agents for Forex trading strategies.
    What is gym-fx?
    gym-fx is an open-source Python library that implements a simulated Forex trading environment using the OpenAI Gym interface. It offers support for multiple currency pairs, integration of historical price feeds, technical indicators, and fully customizable reward functions. By providing a standardized API, gym-fx simplifies the process of benchmarking and developing reinforcement learning algorithms for algorithmic trading. Users can configure market slippage, transaction costs, and observation spaces to closely mimic live trading scenarios, facilitating robust strategy development and evaluation.
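    A hedged sketch of what instantiating such an environment could look like; the environment ID and keyword arguments are assumptions, not gym-fx's documented options:

    ```python
    import gym
    import gym_fx  # assumed import name that registers the environment

    env = gym.make("FxTrading-v0",
                   pair="EURUSD",
                   slippage=0.0001,           # simulated market slippage
                   transaction_cost=0.00002)  # per-trade cost
    obs = env.reset()  # e.g. price window plus technical indicators
    obs, reward, done, info = env.step(env.action_space.sample())  # buy / sell / hold
    ```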
  • gym-llm offers Gym-style environments for benchmarking and training LLM agents on conversational and decision-making tasks.
    What is gym-llm?
    gym-llm extends the OpenAI Gym ecosystem to large language models by defining text-based environments where LLM agents interact through prompts and actions. Each environment follows Gym’s step, reset, and render conventions, emitting observations as text and accepting model-generated responses as actions. Developers can craft custom tasks by specifying prompt templates, reward calculations, and termination conditions, enabling sophisticated decision-making and conversational benchmarks. Integration with popular RL libraries, logging tools, and configurable evaluation metrics facilitates end-to-end experimentation. Whether assessing an LLM’s ability to solve puzzles, manage dialogues, or navigate structured tasks, gym-llm provides a standardized, reproducible framework for research and development of advanced language agents.
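    Conceptually, the interaction loop treats prompts as observations and model replies as actions; the module name, environment ID, and `my_llm` helper below are assumptions for illustration:

    ```python
    import gym
    import gym_llm  # assumed import name

    def my_llm(prompt: str) -> str:
        # Stand-in for a real model call (API client or local model).
        return "go north"

    env = gym.make("TextMaze-v0")  # illustrative environment ID
    obs = env.reset()              # observation is a text prompt
    done = False
    while not done:
        action = my_llm(obs)       # the model's reply is the action
        obs, reward, done, info = env.step(action)
    ```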
  • A Python-based OpenAI Gym environment offering customizable multi-room gridworlds for reinforcement learning agents’ navigation and exploration research.
    What is gym-multigrid?
    gym-multigrid provides a suite of customizable gridworld environments designed for multi-room navigation and exploration tasks in reinforcement learning. Each environment consists of interconnected rooms populated with objects, keys, doors, and obstacles. Users can adjust grid size, room configurations, and object placements programmatically. The library supports both full and partial observation modes, offering RGB or matrix state representations. Actions include movement, object interaction, and door manipulation. By integrating it as a Gym environment, researchers can leverage any Gym-compatible agent, seamlessly training and evaluating algorithms on tasks like key-door puzzles, object retrieval, and hierarchical planning. gym-multigrid’s modular design and minimal dependencies make it ideal for benchmarking new AI strategies.
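    A short sketch of the Gym-style usage; the environment ID below is an illustrative assumption rather than a documented one:

    ```python
    import gym
    import gym_multigrid  # assumed import name that registers the environments

    env = gym.make("MultiGrid-KeyDoor-8x8-v0")  # illustrative ID
    obs = env.reset()                           # full or partial view, per configuration
    obs, reward, done, info = env.step(env.action_space.sample())
    env.render()                                # RGB rendering of the rooms
    ```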
  • HFO_DQN is a reinforcement learning framework that applies a Deep Q-Network to train soccer agents in the RoboCup Half Field Offense environment.
    What is HFO_DQN?
    HFO_DQN combines Python and TensorFlow to deliver a complete pipeline for training soccer agents using Deep Q-Networks. Users can clone the repository, install dependencies including the HFO simulator and Python libraries, and configure training parameters in YAML files. The framework implements experience replay, target network updates, epsilon-greedy exploration, and reward shaping tailored for the half field offense domain. It features scripts for agent training, performance logging, evaluation matches, and plotting results. Modular code structure allows integration of custom neural network architectures, alternative RL algorithms, and multi-agent coordination strategies. Outputs include trained models, performance metrics, and behavior visualizations, facilitating research in reinforcement learning and multi-agent systems.
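    The core DQN ingredients listed above can be sketched generically; the helper names are illustrative, not HFO_DQN's actual API:

    ```python
    import random
    from collections import deque

    import numpy as np

    replay_buffer = deque(maxlen=100_000)  # experience replay storage

    def select_action(q_values: np.ndarray, epsilon: float) -> int:
        # Epsilon-greedy exploration over the HFO action set.
        if random.random() < epsilon:
            return random.randrange(len(q_values))
        return int(np.argmax(q_values))

    def td_target(reward: float, done: bool, next_q_target: np.ndarray,
                  gamma: float = 0.99) -> float:
        # Target-network bootstrap: r + gamma * max_a' Q_target(s', a').
        return reward if done else reward + gamma * float(np.max(next_q_target))
    ```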
  • Jason-RL equips Jason BDI agents with reinforcement learning, enabling Q-learning- and SARSA-based adaptive decision making driven by reward feedback.
    What is jason-RL?
    jason-RL adds a reinforcement learning layer to the Jason multi-agent framework, allowing AgentSpeak BDI agents to learn action-selection policies via reward feedback. It implements Q-learning and SARSA algorithms, supports configuration of learning parameters (learning rate, discount factor, exploration strategy), and logs training metrics. By defining reward functions in agent plans and running simulations, developers can observe agents improve decision making over time, adapting to changing environments without manual policy coding.
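    The two tabular update rules at the heart of the library look like this, sketched in Python for clarity (the framework itself is configured from Jason/AgentSpeak, not through this API):

    ```python
    from collections import defaultdict

    ACTIONS = ["left", "right", "wait"]  # illustrative action set
    Q = defaultdict(float)               # Q[(state, action)] -> estimated return

    def q_learning_update(s, a, r, s_next, alpha=0.1, gamma=0.95):
        # Off-policy: bootstrap from the best available next action.
        best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

    def sarsa_update(s, a, r, s_next, a_next, alpha=0.1, gamma=0.95):
        # On-policy: bootstrap from the action actually taken next.
        Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])
    ```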
  • MARFT is an open-source multi-agent RL fine-tuning toolkit for collaborative AI workflows and language model optimization.
    What is MARFT?
    MARFT (Multi-Agent Reinforcement Fine-Tuning) is a Python-based toolkit for fine-tuning LLM-driven multi-agent systems with reinforcement learning, enabling reproducible experiments and rapid prototyping of collaborative AI systems.
  • An open-source Minecraft-inspired RL platform enabling AI agents to learn complex tasks in customizable 3D sandbox environments.
    What is MineLand?
    MineLand provides a flexible 3D sandbox environment inspired by Minecraft for training reinforcement learning agents. It features Gym-compatible APIs for seamless integration with existing RL libraries such as Stable Baselines, RLlib, and custom implementations. Users gain access to a library of tasks, including resource collection, navigation, and construction challenges, each with configurable difficulty and reward structures. Real-time rendering, multi-agent scenarios, and headless modes allow for scalable training and benchmarking. Developers can design new maps, define custom reward functions, and plug in additional sensors or controls. MineLand’s open-source codebase fosters reproducible research, collaborative development, and rapid prototyping of AI agents in complex virtual worlds.
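    Because the APIs are Gym-compatible, training with an off-the-shelf library such as Stable Baselines3 reduces to a few lines; the `mineland` import and the environment ID are assumptions for illustration:

    ```python
    import gym
    from stable_baselines3 import PPO

    import mineland  # assumed import name that registers the environments

    env = gym.make("MineLand-ResourceCollection-v0")  # illustrative ID
    model = PPO("CnnPolicy", env, verbose=1)          # pixel observations -> CNN policy
    model.learn(total_timesteps=100_000)
    model.save("mineland_ppo")
    ```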
  • Simplified PyTorch implementation of AlphaStar, enabling StarCraft II RL agent training with modular network architecture and self-play.
    What is mini-AlphaStar?
    mini-AlphaStar demystifies the complex AlphaStar architecture by offering an accessible, open-source PyTorch framework for StarCraft II AI development. It features spatial feature encoders for screen and minimap inputs, non-spatial feature processing, LSTM memory modules, and separate policy and value networks for action selection and state evaluation. Using imitation learning to bootstrap and reinforcement learning with self-play for fine-tuning, it supports environment wrappers compatible with StarCraft II via pysc2, logging through TensorBoard, and configurable hyperparameters. Researchers and students can generate datasets from human gameplay, train models on custom scenarios, evaluate agent performance, and visualize learning curves. The modular codebase enables easy experimentation with network variants, training schedules, and multi-agent setups. Designed for education and prototyping rather than production deployment.
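    A toy version of the encoder, LSTM core, and policy/value split described above, with illustrative layer sizes (these are not mini-AlphaStar's actual modules):

    ```python
    import torch
    import torch.nn as nn

    class TinyAlphaStar(nn.Module):
        def __init__(self, spatial_channels=27, scalar_dim=64, hidden=256, n_actions=100):
            super().__init__()
            # Spatial encoder for screen/minimap feature planes.
            self.spatial = nn.Sequential(
                nn.Conv2d(spatial_channels, 32, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.scalar = nn.Linear(scalar_dim, 32)            # non-spatial features
            self.core = nn.LSTM(64, hidden, batch_first=True)  # memory module
            self.policy = nn.Linear(hidden, n_actions)         # action logits
            self.value = nn.Linear(hidden, 1)                  # state-value estimate

        def forward(self, screen, scalars, state=None):
            x = torch.cat([self.spatial(screen), torch.relu(self.scalar(scalars))], dim=-1)
            out, state = self.core(x.unsqueeze(1), state)
            h = out.squeeze(1)
            return self.policy(h), self.value(h), state
    ```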
  • A Unity ML-Agents based environment for training cooperative multi-agent inspection tasks in customizable 3D virtual scenarios.
    What is Multi-Agent Inspection Simulation?
    Multi-Agent Inspection Simulation provides a comprehensive framework for simulating and training multiple autonomous agents to perform inspection tasks cooperatively within Unity 3D environments. It integrates with the Unity ML-Agents toolkit, offering configurable scenes with inspection targets, adjustable reward functions, and agent behavior parameters. Researchers can script custom environments, define the number of agents, and set training curricula via Python APIs. The package supports parallel training sessions, TensorBoard logging, and customizable observations including raycasts, camera feeds, and positional data. By adjusting hyperparameters and environment complexity, users can benchmark reinforcement learning algorithms on coverage, efficiency, and coordination metrics. The open-source codebase encourages extension for robotics prototyping, cooperative AI research, and educational demonstrations in multi-agent systems.
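    Driving a built scene from Python goes through the standard ML-Agents low-level API; the binary name `InspectionSim` below is an assumption:

    ```python
    import numpy as np
    from mlagents_envs.environment import UnityEnvironment
    from mlagents_envs.base_env import ActionTuple

    env = UnityEnvironment(file_name="InspectionSim")  # assumed build name
    env.reset()
    behavior_name = list(env.behavior_specs)[0]
    spec = env.behavior_specs[behavior_name]

    decision_steps, terminal_steps = env.get_steps(behavior_name)
    n_agents = len(decision_steps)
    # Random continuous actions for every agent requesting a decision.
    actions = np.random.randn(n_agents, spec.action_spec.continuous_size).astype(np.float32)
    env.set_actions(behavior_name, ActionTuple(continuous=actions))
    env.step()
    env.close()
    ```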
  • Open-source Python environment for training AI agents to cooperatively surveil and detect intruders in grid-based scenarios.
    What is Multi-Agent Surveillance?
    Multi-Agent Surveillance offers a flexible simulation framework where multiple AI agents act as predators or evaders in a discrete grid world. Users can configure environment parameters such as grid dimensions, number of agents, detection radii, and reward structures. The repository includes Python classes for agent behavior, scenario generation scripts, built-in visualization via matplotlib, and seamless integration with popular reinforcement learning libraries. This makes it easy to benchmark multi-agent coordination, develop custom surveillance strategies, and conduct reproducible experiments.
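    A hypothetical setup call, assuming names like `SurveillanceEnv` purely for illustration (the repository's actual classes may differ):

    ```python
    from multi_agent_surveillance import SurveillanceEnv  # assumed module/class

    env = SurveillanceEnv(grid_size=(20, 20), n_guards=3, n_intruders=2,
                          detection_radius=2, capture_reward=10.0, step_penalty=-0.1)
    obs = env.reset()
    actions = {agent_id: env.action_space.sample() for agent_id in obs}
    obs, rewards, done, info = env.step(actions)
    env.render()  # matplotlib view of agents and detection radii
    ```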
  • An open-source Python simulation environment for training cooperative drone swarm control with multi-agent reinforcement learning.
    What is Multi-Agent Drone Environment?
    Multi-Agent Drone Environment is a Python package offering a customizable multi-agent simulation for UAV swarms, built on OpenAI Gym and PyBullet. Users define multiple drone agents with kinematic and dynamic models to explore cooperative tasks such as formation flying, target tracking, and obstacle avoidance. The environment supports modular task configuration, realistic collision detection, and sensor emulation, while allowing custom reward functions and decentralized policies. Developers can integrate their own reinforcement learning algorithms, evaluate performance under varied scenarios, and visualize agent trajectories and metrics in real time. Its open-source design encourages community contributions, making it ideal for research, teaching, and prototyping advanced multi-agent control solutions.
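    A minimal sketch under assumed names (the module, class, and task labels are illustrative, not the package's confirmed API):

    ```python
    from multi_drone_env import MultiDroneEnv  # assumed import

    env = MultiDroneEnv(n_drones=4, task="formation", enable_collisions=True)
    obs = env.reset()
    for _ in range(100):
        # One action vector per drone, e.g. thrust/attitude commands.
        actions = {i: env.action_space.sample() for i in range(env.n_drones)}
        obs, rewards, done, info = env.step(actions)
        if done:
            break
    ```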
  • Coordinates multiple autonomous waste-collecting agents using reinforcement learning to optimize collection routes efficiently.
    What is Multi-Agent Autonomous Waste Collection System?
    The Multi-Agent Autonomous Waste Collection System is a research-driven platform that employs multi-agent reinforcement learning to train individual waste-collecting robots to collaborate on route planning. Agents learn to avoid redundant coverage, minimize travel distance, and respond to dynamic waste generation patterns. Built in Python, the system integrates a simulation environment for testing and refining policies before real-world deployment. Users can configure map layouts, waste drop-off points, agent sensors, and reward structures to tailor behavior to specific urban areas or operational constraints.
  • Open-source multi-agent AI framework for collaborative object tracking in videos using deep learning and reinforcement-learning-based decision-making.
    What is Multi-Agent Visual Tracking?
    Multi-Agent Visual Tracking implements a distributed tracking system composed of intelligent agents that communicate to improve accuracy and robustness in video object tracking. Agents run convolutional neural networks for detection, share observations to handle occlusions, and adjust tracking parameters through reinforcement learning. Compatible with popular video datasets, it supports both training and real-time inference. Users can easily integrate it into existing pipelines and extend agent behaviors for custom applications.
  • An open-source multi-agent reinforcement learning framework enabling raw-level agent control and coordination in StarCraft II via PySC2.
    What is MultiAgent-Systems-StarCraft2-PySC2-Raw?
    MultiAgent-Systems-StarCraft2-PySC2-Raw offers a complete toolkit for developing, training, and evaluating multiple AI agents in StarCraft II. It exposes low-level controls for unit movement, targeting, and abilities, while allowing flexible reward design and scenario configuration. Users can easily plug in custom neural network architectures, define team-based coordination strategies, and record metrics. Built on top of PySC2, it supports parallel training, checkpointing, and visualization, making it ideal for advancing research in cooperative and adversarial multi-agent reinforcement learning.
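    The raw-level interface it builds on is plain PySC2; a minimal raw-mode environment (map name and feature sizes here are illustrative) looks like:

    ```python
    from pysc2.env import sc2_env
    from pysc2.lib import actions, features

    env = sc2_env.SC2Env(
        map_name="Simple64",
        players=[sc2_env.Agent(sc2_env.Race.terran),
                 sc2_env.Bot(sc2_env.Race.zerg, sc2_env.Difficulty.easy)],
        agent_interface_format=features.AgentInterfaceFormat(
            feature_dimensions=features.Dimensions(screen=84, minimap=64),
            use_raw_units=True, use_raw_actions=True),
        step_mul=8)
    timesteps = env.reset()
    # Raw actions address units directly instead of screen coordinates.
    timesteps = env.step([actions.RAW_FUNCTIONS.no_op()])
    env.close()
    ```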
  • A Python-based multi-agent reinforcement learning framework for developing and simulating cooperative and competitive AI agent environments.
    What is Multiagent_system?
    Multiagent_system offers a comprehensive toolkit for constructing and managing multi-agent environments. Users can define custom simulation scenarios, specify agent behaviors, and leverage pre-implemented algorithms such as DQN, PPO, and MADDPG. The framework supports synchronous and asynchronous training, enabling agents to interact concurrently or in turn-based setups. Built-in communication modules facilitate message passing between agents for cooperative strategies. Experiment configuration is streamlined via YAML files, and results are logged automatically to CSV or TensorBoard. Visualization scripts help interpret agent trajectories, reward evolution, and communication patterns. Designed for research and production workflows, Multiagent_system seamlessly scales from single-machine prototypes to distributed training on GPU clusters.
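    An illustrative YAML-style experiment configuration, loaded here with PyYAML; every key is an assumption rather than the framework's documented schema:

    ```python
    import yaml

    config_text = """
    env: predator_prey
    agents: 4
    algorithm: MADDPG
    training:
      episodes: 5000
      mode: asynchronous
    logging:
      backend: tensorboard
    """
    config = yaml.safe_load(config_text)
    print(config["algorithm"], config["training"]["episodes"])
    ```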
  • A Python-based multi-agent simulation framework enabling concurrent agent collaboration, competition and training across customizable environments.
    What is MultiAgentes?
    MultiAgentes provides a modular architecture for defining environments and agents, supporting synchronous and asynchronous multi-agent interactions. It includes base classes for environments and agents, predefined scenarios for cooperative and competitive tasks, tools for customizing reward functions, and APIs for agent communication and observation sharing. Visualization utilities allow real-time monitoring of agent behaviors, while logging modules record performance metrics for analysis. The framework integrates seamlessly with Gym-compatible reinforcement learning libraries, enabling users to train agents using existing algorithms. MultiAgentes is designed for extensibility, allowing developers to add new environment templates, agent types, and communication protocols to suit diverse research and educational use cases.
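    Extending the base classes might look like the following; every name here is an assumption for illustration, not the framework's actual API:

    ```python
    from multiagentes import BaseEnvironment, BaseAgent  # assumed imports

    class TagEnv(BaseEnvironment):
        def compute_reward(self, agent, state):
            # +1 for the chaser on a capture, 0 otherwise (illustrative rule).
            return 1.0 if state.captured and agent.role == "chaser" else 0.0

    class RandomAgent(BaseAgent):
        def act(self, observation):
            # Baseline behavior: sample uniformly from the action space.
            return self.action_space.sample()
    ```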
  • Open-source framework enabling implementation and evaluation of multi-agent AI strategies in a classic Pacman game environment.
    What is MultiAgentPacman?
    MultiAgentPacman offers a Python-based game environment where users can implement, visualize, and benchmark multiple AI agents in the Pacman domain. It supports adversarial search algorithms like minimax, expectimax, alpha-beta pruning, as well as custom reinforcement learning or heuristic-based agents. The framework includes a simple GUI, command-line controls, and utilities to log game statistics and compare agent performance under competitive or cooperative scenarios.
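    A generic depth-limited minimax of the kind the framework supports; the state interface (`legal_actions`, `successor`, `is_terminal`) is illustrative, not the project's exact GameState API:

    ```python
    def minimax(state, depth, agent_index, num_agents, evaluate):
        # Agent 0 (Pacman) maximizes; ghosts (agents 1..n-1) minimize.
        if depth == 0 or state.is_terminal():
            return evaluate(state), None
        next_agent = (agent_index + 1) % num_agents
        next_depth = depth - 1 if next_agent == 0 else depth  # one ply = all agents move
        best = None
        for action in state.legal_actions(agent_index):
            value, _ = minimax(state.successor(agent_index, action),
                               next_depth, next_agent, num_agents, evaluate)
            if best is None or (agent_index == 0) == (value > best[0]):
                best = (value, action)
        return best
    ```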