Comprehensive Custom Reward Function Tools for Every Need

Browse custom reward function tools that address a range of requirements, gathered in one place for streamlined workflows.

Custom reward functions

  • Gym-Recsys provides customizable OpenAI Gym environments for scalable training and evaluation of reinforcement learning recommendation agents.
    What is Gym-Recsys?
    Gym-Recsys is a toolbox that wraps recommendation tasks into OpenAI Gym environments, allowing reinforcement learning algorithms to interact with simulated user-item matrices step by step. It provides synthetic user behavior generators, supports loading popular datasets, and delivers standard recommendation metrics like Precision@K and NDCG. Users can customize reward functions, user models, and item pools to experiment with different RL-based recommendation strategies in a reproducible manner.
  • gym-fx provides a customizable OpenAI Gym environment to train and evaluate reinforcement learning agents for Forex trading strategies.
    What is gym-fx?
    gym-fx is an open-source Python library that implements a simulated Forex trading environment using the OpenAI Gym interface. It offers support for multiple currency pairs, integration of historical price feeds, technical indicators, and fully customizable reward functions. By providing a standardized API, gym-fx simplifies the process of benchmarking and developing reinforcement learning algorithms for algorithmic trading. Users can configure market slippage, transaction costs, and observation spaces to closely mimic live trading scenarios, facilitating robust strategy development and evaluation.
  • MARFT is an open-source multi-agent RL fine-tuning toolkit for collaborative AI workflows and language model optimization.
    What is MARFT?
    MARFT is a Python-based toolkit for multi-agent reinforcement learning fine-tuning of large language models (LLMs), enabling reproducible experiments and rapid prototyping of collaborative AI systems.
  • Open-source Python environment for training AI agents to cooperatively surveil and detect intruders in grid-based scenarios.
    What is Multi-Agent Surveillance?
    Multi-Agent Surveillance offers a flexible simulation framework where multiple AI agents act as predators or evaders in a discrete grid world. Users can configure environment parameters such as grid dimensions, number of agents, detection radii, and reward structures. The repository includes Python classes for agent behavior, scenario generation scripts, built-in visualization via matplotlib, and seamless integration with popular reinforcement learning libraries. This makes it easy to benchmark multi-agent coordination, develop custom surveillance strategies, and conduct reproducible experiments.
  • Implements decentralized multi-agent DDPG reinforcement learning using PyTorch and Unity ML-Agents for collaborative agent training.
    What is Multi-Agent DDPG with PyTorch & Unity ML-Agents?
    This open-source project delivers a complete multi-agent reinforcement learning framework built on PyTorch and Unity ML-Agents. It offers decentralized DDPG algorithms, environment wrappers, and training scripts. Users can configure agent policies, critic networks, replay buffers, and parallel training workers. Logging hooks allow TensorBoard monitoring, while modular code supports custom reward functions and environment parameters. The repository includes sample Unity scenes demonstrating collaborative navigation tasks, making it ideal for extending and benchmarking multi-agent scenarios in simulation.
  • RL Shooter provides a customizable Doom-based reinforcement learning environment for training AI agents to navigate and shoot targets.
    What is RL Shooter?
    RL Shooter is a Python-based framework that integrates ViZDoom with OpenAI Gym APIs to create a flexible reinforcement learning environment for FPS games. Users can define custom scenarios, maps, and reward structures to train agents on navigation, target detection, and shooting tasks. With configurable observation frames, action spaces, and logging facilities, it supports popular deep RL libraries such as Stable Baselines and RLlib, enabling clear performance tracking and reproducibility across experiments.
  • A lightweight Python library for creating customizable 2D grid environments to train and test reinforcement learning agents.
    What is Simple Playgrounds?
    Simple Playgrounds provides a modular platform for building interactive 2D grid environments where agents can navigate mazes, interact with objects, and complete tasks. Users define environment layouts, object behaviors, and reward functions via simple YAML or Python scripts. The integrated Pygame renderer delivers real-time visualization, while a step-based API ensures seamless integration with reinforcement learning libraries like Stable Baselines3. With support for multi-agent setups, collision detection, and customizable physics parameters, Simple Playgrounds streamlines the prototyping, benchmarking, and educational demonstration of AI algorithms.
  • An open-source reinforcement learning agent using PPO to train and play StarCraft II via DeepMind's PySC2 environment.
    What is StarCraft II Reinforcement Learning Agent?
    This repository provides an end-to-end reinforcement learning framework for StarCraft II gameplay research. The core agent uses Proximal Policy Optimization (PPO) to learn policy networks that interpret observation data from the PySC2 environment and output precise in-game actions. Developers can configure neural network layers, reward shaping, and training schedules to optimize performance. The system supports multiprocessing for efficient sample collection, logging utilities for monitoring training curves, and evaluation scripts for running trained policies against scripted or built-in AI opponents. The codebase is written in Python and leverages TensorFlow for model definition and optimization. Users can extend components such as custom reward functions, state preprocessing, or network architectures to suit specific research objectives.