Ultimate Hyperparameter Tuning Solutions for Everyone

Discover all-in-one hyperparameter tuning tools that adapt to your needs. Reach new heights of productivity with ease.

Hyperparameter Tuning

  • HFO_DQN is a reinforcement learning framework that applies Deep Q-Networks to train soccer agents in the RoboCup Half Field Offense environment.
    What is HFO_DQN?
    HFO_DQN combines Python and TensorFlow to deliver a complete pipeline for training soccer agents using Deep Q-Networks. Users can clone the repository, install dependencies including the HFO simulator and Python libraries, and configure training parameters in YAML files. The framework implements experience replay, target network updates, epsilon-greedy exploration, and reward shaping tailored for the half field offense domain. It features scripts for agent training, performance logging, evaluation matches, and plotting results. The modular code structure allows integration of custom neural network architectures, alternative RL algorithms, and multi-agent coordination strategies. Outputs include trained models, performance metrics, and behavior visualizations, facilitating research in reinforcement learning and multi-agent systems.
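    As a concrete illustration of the epsilon-greedy exploration the framework implements, here is a minimal Python sketch; the annealing schedule and its default values are illustrative assumptions, not HFO_DQN's actual settings:

        import numpy as np

        def epsilon_greedy(q_values, epsilon):
            """Pick a random action with probability epsilon, else the greedy one."""
            if np.random.rand() < epsilon:
                return np.random.randint(len(q_values))
            return int(np.argmax(q_values))

        def linear_epsilon(step, start=1.0, end=0.05, decay_steps=100_000):
            """Anneal exploration from `start` down to `end` over `decay_steps` steps."""
            frac = min(step / decay_steps, 1.0)
            return start + frac * (end - start)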
  • A platform to prototype, evaluate, and improve LLM applications rapidly.
    What is Inductor?
    Inductor.ai is a platform that helps developers build, prototype, and refine Large Language Model (LLM) applications. Through systematic evaluation and continuous iteration, it supports the development of reliable, high-quality LLM-powered functionality. With features like custom playgrounds, continuous testing, and hyperparameter optimization, Inductor helps keep your LLM applications production-ready, streamlined, and cost-effective.
  • LossLens AI is an AI-powered assistant that analyzes machine learning training loss curves to diagnose issues and suggest hyperparameter improvements.
    What is LossLens AI?
    LossLens AI is an intelligent assistant designed to help machine learning practitioners understand and optimize their model training processes. By ingesting loss logs and metrics, it generates interactive visualizations of training and validation curves, identifies divergence or overfitting issues, and provides natural language explanations. Leveraging advanced language models, it offers context-aware hyperparameter tuning suggestions and early stopping advice. The agent supports collaborative workflows through a REST API or web interface, enabling teams to iterate faster and achieve better model performance.
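    The overfitting diagnosis described above could work along these lines; this is a hypothetical heuristic sketched for intuition, not LossLens AI's actual detection logic:

        def detect_overfitting(train_loss, val_loss, window=5, tol=1e-3):
            """Flag overfitting when validation loss trends up while training loss trends down."""
            if len(val_loss) < 2 * window:
                return False  # not enough history yet
            val_recent = sum(val_loss[-window:]) / window
            val_prev = sum(val_loss[-2 * window:-window]) / window
            train_recent = sum(train_loss[-window:]) / window
            train_prev = sum(train_loss[-2 * window:-window]) / window
            return val_recent > val_prev + tol and train_recent < train_prev - tol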
  • Simplified PyTorch implementation of AlphaStar, enabling StarCraft II RL agent training with modular network architecture and self-play.
    What is mini-AlphaStar?
    mini-AlphaStar demystifies the complex AlphaStar architecture by offering an accessible, open-source PyTorch framework for StarCraft II AI development. It features spatial feature encoders for screen and minimap inputs, non-spatial feature processing, LSTM memory modules, and separate policy and value networks for action selection and state evaluation. Agents are bootstrapped with imitation learning and fine-tuned with reinforcement learning and self-play; the framework also provides environment wrappers for StarCraft II via pysc2, logging through TensorBoard, and configurable hyperparameters. Researchers and students can generate datasets from human gameplay, train models on custom scenarios, evaluate agent performance, and visualize learning curves. The modular codebase enables easy experimentation with network variants, training schedules, and multi-agent setups. It is designed for education and prototyping rather than production deployment.
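    A minimal PyTorch sketch of the described architecture (spatial encoder, LSTM core, separate policy and value heads); the channel counts, input sizes, and layer widths are assumptions, not mini-AlphaStar's real configuration:

        import torch
        import torch.nn as nn

        class MiniAgentNet(nn.Module):
            """Sketch: spatial encoder + LSTM core + policy/value heads (sizes assumed)."""
            def __init__(self, n_actions, hidden=256):
                super().__init__()
                # Conv encoder for 17-channel 64x64 screen/minimap planes (assumed shapes).
                self.spatial = nn.Sequential(
                    nn.Conv2d(17, 32, 4, stride=2), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(), nn.Flatten())
                self.nonspatial = nn.Sequential(nn.Linear(10, 64), nn.ReLU())
                self.core = nn.LSTM(64 * 15 * 15 + 64, hidden, batch_first=True)
                self.policy = nn.Linear(hidden, n_actions)  # action logits
                self.value = nn.Linear(hidden, 1)           # state-value estimate

            def forward(self, screen, scalars, state=None):
                x = torch.cat([self.spatial(screen), self.nonspatial(scalars)], dim=-1)
                out, state = self.core(x.unsqueeze(1), state)  # one step through memory
                h = out.squeeze(1)
                return self.policy(h), self.value(h), state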
  • Model ML offers advanced automated machine learning tools for developers.
    What is Model ML?
    Model ML uses state-of-the-art algorithms to simplify the machine learning lifecycle. It automates data preprocessing, model selection, and hyperparameter tuning, making it easier for developers to create highly accurate predictive models without deep technical expertise. With user-friendly interfaces and extensive documentation, Model ML is ideal for teams looking to quickly add machine learning capabilities to their projects.
  • An open-source framework for training and evaluating cooperative and competitive multi-agent reinforcement learning algorithms across diverse environments.
    What is Multi-Agent Reinforcement Learning?
    Multi-Agent Reinforcement Learning by alaamoheb is a comprehensive open-source library designed to facilitate the development, training, and evaluation of multiple agents acting in shared environments. It includes modular implementations of value-based and policy-based algorithms such as DQN, PPO, MADDPG, and more. The repository supports integration with OpenAI Gym, Unity ML-Agents, and the StarCraft Multi-Agent Challenge, allowing users to experiment in both research and real-world-inspired scenarios. With configurable YAML-based experiment setups, logging utilities, and visualization tools, practitioners can monitor learning curves, tune hyperparameters, and compare different algorithms. This framework accelerates experimentation in cooperative, competitive, and mixed multi-agent tasks, streamlining reproducible research and benchmarking.
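    A YAML-based experiment setup of the kind mentioned might look like the following sketch; the keys and values are hypothetical, chosen only to show the pattern:

        import yaml  # PyYAML

        # Hypothetical experiment config in the YAML style the library describes.
        config_text = """
        algorithm: maddpg
        env: simple_spread
        agents: 3
        train:
          episodes: 5000
          lr: 0.001
          gamma: 0.95
        """

        config = yaml.safe_load(config_text)
        print(config["algorithm"], config["train"]["lr"])  # -> maddpg 0.001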
  • Implements decentralized multi-agent DDPG reinforcement learning using PyTorch and Unity ML-Agents for collaborative agent training.
    What is Multi-Agent DDPG with PyTorch & Unity ML-Agents?
    This open-source project delivers a complete multi-agent reinforcement learning framework built on PyTorch and Unity ML-Agents. It offers decentralized DDPG algorithms, environment wrappers, and training scripts. Users can configure agent policies, critic networks, replay buffers, and parallel training workers. Logging hooks allow TensorBoard monitoring, while modular code supports custom reward functions and environment parameters. The repository includes sample Unity scenes demonstrating collaborative navigation tasks, making it ideal for extending and benchmarking multi-agent scenarios in simulation.
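    One building block DDPG-style training relies on is the soft (Polyak) target-network update, sketched here in PyTorch; the tau value is an illustrative default, not necessarily this repository's:

        import torch

        @torch.no_grad()
        def soft_update(target_net, online_net, tau=0.005):
            """Polyak-average online weights into the target network (DDPG-style)."""
            for t_param, param in zip(target_net.parameters(), online_net.parameters()):
                t_param.mul_(1.0 - tau).add_(tau * param)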
  • An open-source Python framework enabling design, training, and evaluation of cooperative and competitive multi-agent reinforcement learning systems.
    What is MultiAgentSystems?
    MultiAgentSystems is designed to simplify the process of building and evaluating multi-agent reinforcement learning (MARL) applications. The platform includes implementations of state-of-the-art algorithms like MADDPG, QMIX, VDN, and centralized training with decentralized execution. It features modular environment wrappers compatible with OpenAI Gym, communication protocols for agent interaction, and logging utilities to track metrics such as shaped rewards and convergence rates. Researchers can customize agent architectures, tune hyperparameters, and simulate settings including cooperative navigation, resource allocation, and adversarial games. With built-in support for PyTorch, GPU acceleration, and TensorBoard integration, MultiAgentSystems accelerates experimentation and benchmarking in collaborative and competitive multi-agent domains.
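    To make the value-factorization idea behind VDN concrete, here is a minimal sketch where the joint Q-value is the sum of per-agent utilities; this illustrates the general technique, not MultiAgentSystems' exact implementation:

        import torch

        def vdn_team_q(per_agent_qs):
            """VDN factorization: the joint Q-value is the sum of per-agent utilities.
            per_agent_qs: tensor of shape (batch, n_agents), each agent's chosen-action Q."""
            return per_agent_qs.sum(dim=1)

    QMIX generalizes this by replacing the sum with a learned monotonic mixing network conditioned on the global state.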
  • A Python framework enabling the design, simulation, and reinforcement learning of cooperative multi-agent systems.
    What is MultiAgentModel?
    MultiAgentModel provides a unified API to define custom environments and agent classes for multi-agent scenarios. Developers can specify observation and action spaces, reward structures, and communication channels. Built-in support for popular RL algorithms like PPO, DQN, and A2C allows training with minimal configuration. Real-time visualization tools help monitor agent interactions and performance metrics. The modular architecture ensures easy integration of new algorithms and custom modules. It also includes a flexible configuration system for hyperparameter tuning, logging utilities for experiment tracking, and compatibility with OpenAI Gym environments for seamless portability. Users can collaborate on shared environments and replay logged sessions for analysis.
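    A hypothetical environment in the per-agent dict style this API describes might look like the sketch below; the class name, grid dynamics, and reward values are invented for illustration:

        class GridWorldMA:
            """Hypothetical cooperative grid world with per-agent dict observations/actions."""
            def __init__(self, n_agents=2, size=5):
                self.agents = [f"agent_{i}" for i in range(n_agents)]
                self.size = size

            def reset(self):
                self.pos = {a: (0, i) for i, a in enumerate(self.agents)}
                return dict(self.pos)  # per-agent observations

            def step(self, actions):
                # actions: {agent_id: (dx, dy)}; moves are clamped to the grid.
                for a, (dx, dy) in actions.items():
                    x, y = self.pos[a]
                    self.pos[a] = (min(max(x + dx, 0), self.size - 1),
                                   min(max(y + dy, 0), self.size - 1))
                goal = (self.size - 1, self.size - 1)
                rewards = {a: 1.0 if self.pos[a] == goal else -0.01 for a in self.agents}
                done = all(p == goal for p in self.pos.values())  # cooperative termination
                return dict(self.pos), rewards, done, {}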
  • A Keras-based implementation of Multi-Agent Deep Deterministic Policy Gradient for cooperative and competitive multi-agent RL.
    What is MADDPG-Keras?
    MADDPG-Keras delivers a complete framework for multi-agent reinforcement learning research by implementing the MADDPG algorithm in Keras. It supports continuous action spaces, multiple agents, and standard OpenAI Gym environments. Researchers and developers can configure neural network architectures, training hyperparameters, and reward functions, then launch experiments with built-in logging and model checkpointing to accelerate multi-agent policy learning and benchmarking.
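    The core of MADDPG is a centralized critic that conditions on every agent's observations and actions; a minimal Keras sketch of that idea follows, with layer sizes chosen arbitrarily rather than taken from this repository:

        import tensorflow as tf
        from tensorflow.keras import layers

        def build_centralized_critic(n_agents, obs_dim, act_dim):
            """MADDPG-style critic: scores a joint state-action across ALL agents."""
            obs_in = layers.Input(shape=(n_agents * obs_dim,), name="all_obs")
            act_in = layers.Input(shape=(n_agents * act_dim,), name="all_actions")
            x = layers.Concatenate()([obs_in, act_in])
            x = layers.Dense(128, activation="relu")(x)
            x = layers.Dense(128, activation="relu")(x)
            q = layers.Dense(1, name="q_value")(x)
            return tf.keras.Model([obs_in, act_in], q)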
  • An open-source framework enabling training, deployment, and evaluation of multi-agent reinforcement learning models for cooperative and competitive tasks.
    What is NKC Multi-Agent Models?
    NKC Multi-Agent Models provides researchers and developers with a comprehensive toolkit for designing, training, and evaluating multi-agent reinforcement learning systems. It features a modular architecture where users define custom agent policies, environment dynamics, and reward structures. Seamless integration with OpenAI Gym allows for rapid prototyping, while support for TensorFlow and PyTorch enables flexibility in selecting learning backends. The framework includes utilities for experience replay, centralized training with decentralized execution, and distributed training across multiple GPUs. Extensive logging and visualization modules capture performance metrics, facilitating benchmarking and hyperparameter tuning. By simplifying the setup of cooperative, competitive, and mixed-motive scenarios, NKC Multi-Agent Models accelerates experimentation in domains such as autonomous vehicles, robotic swarms, and game AI.
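    As a concrete example of the experience replay utility mentioned above, here is a minimal Python sketch; it shows the standard technique rather than NKC's actual implementation:

        import random
        from collections import deque

        class ReplayBuffer:
            """Fixed-capacity replay of (s, a, r, s', done) transitions."""
            def __init__(self, capacity=100_000):
                self.buffer = deque(maxlen=capacity)  # oldest transitions evicted first

            def push(self, transition):
                self.buffer.append(transition)

            def sample(self, batch_size):
                return random.sample(self.buffer, batch_size)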
  • An RL framework offering PPO, DQN training and evaluation tools for developing competitive Pommerman game agents.
    What is PommerLearn?
    PommerLearn enables researchers and developers to train multi-agent RL bots in the Pommerman game environment. It includes ready-to-use implementations of popular algorithms (PPO, DQN), flexible configuration files for hyperparameters, automatic logging and visualization of training metrics, model checkpointing, and evaluation scripts. Its modular architecture makes it easy to extend with new algorithms, customize environments, and integrate with standard ML libraries such as PyTorch.
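    Model checkpointing of the kind described typically amounts to persisting model and optimizer state; a minimal PyTorch sketch follows, with a file layout that is an assumption rather than PommerLearn's actual format:

        import torch

        def save_checkpoint(path, model, optimizer, step):
            """Persist model/optimizer state so training can resume or be evaluated later."""
            torch.save({"model": model.state_dict(),
                        "optim": optimizer.state_dict(),
                        "step": step}, path)

        def load_checkpoint(path, model, optimizer):
            ckpt = torch.load(path)
            model.load_state_dict(ckpt["model"])
            optimizer.load_state_dict(ckpt["optim"])
            return ckpt["step"]  # resume from the saved training step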
  • A DRL pipeline that resets underperforming agents to previous top performers to improve multi-agent reinforcement learning stability and performance.
    What is Selective Reincarnation for Multi-Agent Reinforcement Learning?
    Selective Reincarnation introduces a dynamic population-based training mechanism tailored for multi-agent reinforcement learning. Each agent’s performance is regularly evaluated against predefined thresholds. When an agent’s performance falls below that of its peers, its weights are reset to those of the current top performer, effectively reincarnating it with proven behaviors. This approach maintains diversity by resetting only underperformers, minimizing destructive resets while guiding exploration toward high-reward policies. By enabling targeted inheritance of neural network parameters, the pipeline reduces variance and accelerates convergence across cooperative and competitive multi-agent environments. Compatible with any policy-gradient-based MARL algorithm, the implementation integrates seamlessly into PyTorch-based workflows and includes configurable hyperparameters for evaluation frequency, selection criteria, and reset strategy tuning.
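    The reset mechanism can be sketched in a few lines of PyTorch-style Python; the threshold rule and function names are illustrative assumptions, not the repository's exact criteria:

        def selectively_reincarnate(agents, scores, threshold_ratio=0.8):
            """Reset agents scoring below a fraction of the best score to the top
            performer's weights (assumes non-negative scores for this simple rule)."""
            best = max(range(len(agents)), key=lambda i: scores[i])
            cutoff = threshold_ratio * scores[best]
            for i, agent in enumerate(agents):
                if i != best and scores[i] < cutoff:
                    # Copy proven parameters into the underperformer ("reincarnation").
                    agent.load_state_dict(agents[best].state_dict())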
  • Vanilla Agents provides ready-to-use implementations of DQN, PPO, and A2C RL agents with customizable training pipelines.
    What is Vanilla Agents?
    Vanilla Agents is a lightweight PyTorch-based framework that delivers modular and extensible implementations of core reinforcement learning agents. It supports algorithms like DQN, Double DQN, PPO, and A2C, with pluggable environment wrappers compatible with OpenAI Gym. Users can configure hyperparameters, log training metrics, save checkpoints, and visualize learning curves. The codebase is organized for clarity, making it ideal for research prototyping, educational use, and benchmarking new ideas in RL.
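    For intuition, here is a minimal sketch of the advantage-based losses underlying an A2C agent like those Vanilla Agents ships; it shows the standard formulation, not the framework's exact code:

        import torch

        def a2c_losses(log_probs, values, returns):
            """Advantage actor-critic losses over a rollout (all inputs 1-D tensors)."""
            advantages = returns - values.detach()          # A(s,a) = R - V(s)
            policy_loss = -(log_probs * advantages).mean()  # reinforce better-than-expected actions
            value_loss = (returns - values).pow(2).mean()   # fit V(s) to observed returns
            return policy_loss, value_loss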
  • Acme is a modular reinforcement learning framework offering reusable agent components and efficient distributed training pipelines.
    What is Acme?
    Acme is a Python-based framework that simplifies the development and evaluation of reinforcement learning agents. It offers a collection of prebuilt agent implementations (e.g., DQN, PPO, SAC), environment wrappers, replay buffers, and distributed execution engines. Researchers can mix and match components to prototype new algorithms, monitor training metrics with built-in logging, and leverage scalable distributed pipelines for large-scale experiments. Acme integrates with TensorFlow and JAX, supports custom environments via OpenAI Gym interfaces, and includes utilities for checkpointing, evaluation, and hyperparameter configuration.
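    A minimal usage sketch based on the quickstart pattern in Acme's documentation; module paths and agent constructors can differ between Acme versions, so treat the details as assumptions:

        import acme
        from acme import specs, wrappers
        from acme.agents.tf import dqn
        import gym
        import sonnet as snt

        # Wrap a Gym environment for Acme and build a simple Q-network.
        env = wrappers.SinglePrecisionWrapper(wrappers.GymWrapper(gym.make("CartPole-v0")))
        spec = specs.make_environment_spec(env)
        network = snt.Sequential([snt.Flatten(),
                                  snt.nets.MLP([64, 64, spec.actions.num_values])])

        agent = dqn.DQN(environment_spec=spec, network=network)
        acme.EnvironmentLoop(env, agent).run(num_episodes=100)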
  • AutoML-Agent automates data preprocessing, feature engineering, model search, hyperparameter tuning, and deployment via LLM-driven workflows for streamlined ML pipelines.
    What is AutoML-Agent?
    AutoML-Agent provides a versatile Python-based framework that orchestrates every stage of the machine learning lifecycle through an intelligent agent interface. Starting with automated data ingestion, it performs exploratory analysis, missing value handling, and feature engineering using configurable pipelines. Next, it conducts model architecture search and hyperparameter optimization powered by large language models to suggest optimal configurations. The agent then runs experiments in parallel, tracking metrics and visualizations to compare performance. Once the best model is identified, AutoML-Agent streamlines deployment by generating Docker containers or cloud-native artifacts compatible with common MLOps platforms. Users can further customize workflows via plugin modules and monitor model drift over time, ensuring robust, efficient, and reproducible AI solutions in production environments.
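    The LLM-driven search loop might be organized like the following sketch; suggest_config and train_and_score are hypothetical callables standing in for the LLM and the experiment runner, not part of AutoML-Agent's actual API:

        def tune_with_llm(suggest_config, train_and_score, n_trials=10):
            """Hypothetical loop: an LLM proposes configs, we train, and feed results back."""
            history, best = [], None
            for _ in range(n_trials):
                config = suggest_config(history)   # LLM-suggested hyperparameters
                score = train_and_score(config)    # run the experiment
                history.append({"config": config, "score": score})
                if best is None or score > best["score"]:
                    best = history[-1]
            return best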
  • An AI-powered trading agent using deep reinforcement learning to optimize stock and crypto trading strategies in live markets.
    What is Deep Trading Agent?
    Deep Trading Agent provides a complete pipeline for algorithmic trading: data ingestion, environment simulation compliant with OpenAI Gym, deep RL model training (e.g., DQN, PPO, A2C), performance visualization, backtesting on historical data, and live deployment through broker API connectors. Users can define custom reward metrics, tune hyperparameters, and monitor agent performance in real time. The modular architecture supports stocks, forex, and cryptocurrency markets and allows seamless extension to new asset classes.
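    A custom reward metric of the kind users can define might be a risk-adjusted (Sharpe-style) return, sketched below; the formula and windowing are illustrative choices, not the project's built-in reward:

        import numpy as np

        def sharpe_reward(returns, risk_free=0.0, eps=1e-8):
            """Risk-adjusted reward over a window of per-step portfolio returns."""
            excess = np.asarray(returns) - risk_free
            return float(excess.mean() / (excess.std() + eps))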
  • Python-based RL framework implementing deep Q-learning to train an AI agent for Chrome's offline dinosaur game.
    What is Dino Reinforcement Learning?
    Dino Reinforcement Learning offers a comprehensive toolkit for training an AI agent to play the Chrome dinosaur game via reinforcement learning. By integrating with a headless Chrome instance through Selenium, it captures real-time game frames and processes them into state representations optimized for deep Q-network inputs. The framework includes modules for replay memory, epsilon-greedy exploration, convolutional neural network models, and training loops with customizable hyperparameters. Users can monitor training progress via console logs and save checkpoints for later evaluation. Post-training, the agent can be deployed to play live games autonomously or benchmarked against different model architectures. The modular design allows easy substitution of RL algorithms, making it a flexible platform for experimentation.
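    The frame-to-state processing described above typically looks like the following sketch (grayscale, downscale, normalize, stack); the 84x84 size and 4-frame stack are common DQN conventions assumed here, not necessarily this repository's values:

        import cv2
        import numpy as np

        def preprocess_frame(frame, size=(84, 84)):
            """Convert a captured RGB game frame into a DQN-friendly input."""
            gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
            small = cv2.resize(gray, size, interpolation=cv2.INTER_AREA)
            return small.astype(np.float32) / 255.0  # scale pixels to [0, 1]

        def stack_frames(frames):
            """Stack the last 4 frames so the network can infer motion/velocity."""
            return np.stack(frames[-4:], axis=-1)  # shape: (84, 84, 4)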
  • Open source TensorFlow-based Deep Q-Network agent that learns to play Atari Breakout using experience replay and target networks.
    What is DQN-Deep-Q-Network-Atari-Breakout-TensorFlow?
    DQN-Deep-Q-Network-Atari-Breakout-TensorFlow provides a complete implementation of the DQN algorithm tailored for the Atari Breakout environment. It uses a convolutional neural network to approximate Q-values, applies experience replay to break correlations between sequential observations, and employs a periodically updated target network to stabilize training. The agent follows an epsilon-greedy policy for exploration and can be trained from scratch on raw pixel input. The repository includes configuration files, training scripts to monitor reward growth over episodes, evaluation scripts to test trained models, and TensorBoard utilities for visualizing training metrics. Users can adjust hyperparameters such as learning rate, replay buffer size, and batch size to experiment with different setups.
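    The target-network computation at the heart of the algorithm can be sketched in TensorFlow as follows; target_network is assumed to be a tf.keras model mapping states to per-action Q-values:

        import tensorflow as tf

        def dqn_targets(rewards, next_states, dones, target_network, gamma=0.99):
            """Bellman targets from the frozen target network:
            y = r + gamma * max_a' Q_target(s', a') for non-terminal transitions."""
            next_q = tf.reduce_max(target_network(next_states), axis=1)
            return rewards + gamma * (1.0 - tf.cast(dones, tf.float32)) * next_q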