Comprehensive AI Agent Tools for Every Need

Get access to AI agent solutions that address multiple requirements. One-stop resources for streamlined workflows.

AI Agents

  • A Python library enabling developers to build robust AI agents with state machines managing LLM-driven workflows.
    What is Robocorp LLM State Machine?
    LLM State Machine is an open-source Python framework designed to construct AI agents using explicit state machines. Developers define states as discrete steps—each invoking a large language model or custom logic—and transitions based on outputs. This approach provides clarity, maintainability, and robust error handling for multi-step, LLM-powered workflows, such as document processing, conversational bots, or automation pipelines.
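    The following is a minimal, framework-agnostic sketch of the state-machine pattern described above, where each state invokes an LLM (or custom logic) and its output selects the next state. The state names, the call_llm stub, and the transition logic are illustrative assumptions, not LLM State Machine's actual API.

    ```python
    from typing import Callable, Dict, Tuple

    def call_llm(prompt: str) -> str:
        """Stand-in for a real LLM call; returns a canned answer so the sketch runs offline."""
        return "APPROVED"

    # Each state is a function that returns (payload, next_state_name).
    def extract(doc: str) -> Tuple[str, str]:
        summary = call_llm(f"Summarize this document:\n{doc}")
        return summary, "review"

    def review(summary: str) -> Tuple[str, str]:
        verdict = call_llm(f"Reply APPROVED or REJECTED for:\n{summary}")
        return summary, "done" if "APPROVED" in verdict else "extract"

    STATES: Dict[str, Callable[[str], Tuple[str, str]]] = {"extract": extract, "review": review}

    def run(doc: str, start: str = "extract", max_steps: int = 10) -> str:
        payload, state = doc, start
        for _ in range(max_steps):          # bounded loop guards against transition cycles
            if state == "done":
                return payload
            payload, state = STATES[state](payload)
        raise RuntimeError("state machine did not terminate")

    print(run("Quarterly report text"))
    ```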
  • A JavaScript framework to build AI agents with dynamic tool integration, memory, and workflow orchestration.
    What is Modus?
    Modus is a developer-focused framework that simplifies the creation of AI agents by providing core components for LLM integration, memory storage, and tool orchestration. It supports plugin-based tool libraries, enabling agents to perform tasks like data retrieval, analysis, and action execution. With built-in memory modules, agents can maintain conversational context and learn over interactions. Its extensible architecture accelerates AI development and deployment across various applications.
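    Modus itself is a JavaScript framework; as a language-neutral illustration of the memory idea described above (an agent that keeps conversational context between turns), here is a small Python sketch. The Memory class and respond function are hypothetical stand-ins, not Modus APIs.

    ```python
    from collections import deque

    class Memory:
        """Keeps the last `capacity` exchanges so prompts carry recent context."""
        def __init__(self, capacity: int = 5):
            self.turns = deque(maxlen=capacity)

        def add(self, user: str, agent: str) -> None:
            self.turns.append((user, agent))

        def as_prompt(self) -> str:
            return "\n".join(f"User: {u}\nAgent: {a}" for u, a in self.turns)

    def respond(memory: Memory, user_msg: str) -> str:
        # A real agent would send `context` plus `user_msg` to an LLM here.
        context = memory.as_prompt()
        reply = f"(answer informed by {len(memory.turns)} remembered turns)"
        memory.add(user_msg, reply)
        return reply

    mem = Memory()
    print(respond(mem, "What teas do you stock?"))
    print(respond(mem, "And which of those are caffeine-free?"))
    ```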
  • An open-source multi-agent reinforcement learning framework enabling raw-level agent control and coordination in StarCraft II via PySC2.
    What is MultiAgent-Systems-StarCraft2-PySC2-Raw?
    MultiAgent-Systems-StarCraft2-PySC2-Raw offers a complete toolkit for developing, training, and evaluating multiple AI agents in StarCraft II. It exposes low-level controls for unit movement, targeting, and abilities, while allowing flexible reward design and scenario configuration. Users can easily plug in custom neural network architectures, define team-based coordination strategies, and record metrics. Built on top of PySC2, it supports parallel training, checkpointing, and visualization, making it ideal for advancing research in cooperative and adversarial multi-agent reinforcement learning.
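    For a rough idea of what raw-level control looks like, the sketch below drives a player's units toward the map centre through PySC2's raw interface. It assumes pysc2 3.x with raw units and raw actions enabled, a local StarCraft II install, and the Simple64 map; none of this is taken from the repository's own scenarios or training code.

    ```python
    from pysc2.env import sc2_env
    from pysc2.lib import actions, features

    def main():
        with sc2_env.SC2Env(
            map_name="Simple64",
            players=[sc2_env.Agent(sc2_env.Race.terran),
                     sc2_env.Bot(sc2_env.Race.random, sc2_env.Difficulty.easy)],
            agent_interface_format=features.AgentInterfaceFormat(
                use_raw_units=True, use_raw_actions=True, raw_resolution=64),
            step_mul=8,
        ) as env:
            obs = env.reset()[0]
            for _ in range(100):
                # Select our own units from the raw observation.
                mine = [u for u in obs.observation.raw_units
                        if u.alliance == features.PlayerRelative.SELF]
                if mine:
                    # Issue a raw move order toward the middle of the map.
                    act = actions.RAW_FUNCTIONS.Move_pt(
                        "now", [u.tag for u in mine], (32, 32))
                else:
                    act = actions.RAW_FUNCTIONS.no_op()
                obs = env.step([act])[0]
                if obs.last():
                    break

    if __name__ == "__main__":
        main()
    ```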
  • Trainable Agents is a Python framework enabling fine-tuning and interactive training of AI agents on custom tasks via human feedback.
    What is Trainable Agents?
    Trainable Agents is designed as a modular, extensible toolkit for rapid development and training of AI agents powered by state-of-the-art large language models. The framework abstracts core components such as interaction environments, policy interfaces, and feedback loops, enabling developers to define tasks, supply demonstrations, and implement reward functions effortlessly. With built-in support for OpenAI GPT and Anthropic Claude, the library facilitates experience replay, batch training, and performance evaluation. Trainable Agents also includes utilities for logging, metrics tracking, and exporting trained policies for deployment. Whether building conversational bots, automating workflows, or conducting research, this framework streamlines the entire lifecycle from prototyping to production in a unified Python package.
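    The sketch below illustrates the human-feedback loop described above in plain Python: sample candidate responses, collect a rating for each, and keep the best-rated examples as a training batch. The class names, the generate stub, and the random scores are hypothetical; they are not Trainable Agents' actual interfaces.

    ```python
    import random
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class FeedbackExample:
        prompt: str
        response: str
        reward: float                      # human-assigned score in [0, 1]

    @dataclass
    class FeedbackBuffer:
        examples: List[FeedbackExample] = field(default_factory=list)

        def add(self, ex: FeedbackExample) -> None:
            self.examples.append(ex)

        def best(self, top_k: int = 10) -> List[FeedbackExample]:
            """Highest-rated examples, e.g. for a fine-tuning batch."""
            return sorted(self.examples, key=lambda e: e.reward, reverse=True)[:top_k]

    def generate(prompt: str, n: int = 3) -> List[str]:
        # Stand-in for sampling n candidate answers from an LLM.
        return [f"candidate {i} for: {prompt}" for i in range(n)]

    buffer = FeedbackBuffer()
    for prompt in ["Summarize this support ticket", "Draft a polite refusal"]:
        for response in generate(prompt):
            score = random.random()        # replace with a real human rating
            buffer.add(FeedbackExample(prompt, response, score))

    print(f"{len(buffer.examples)} rated examples; top batch of {len(buffer.best(4))} ready for fine-tuning")
    ```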
  • An AI agent suite using LangChain to simulate coffee shop roles like barista, cashier, and manager.
    What is Coffee-Shop-AI-Agents?
    Coffee-Shop-AI-Agents is an open-source framework for building and deploying specialized AI agents that automate key coffee shop functions. Leveraging LangChain and OpenAI models, the project provides modular agents, including a barista agent that handles complex beverage orders, offers customization recommendations, and manages ingredient availability. The cashier agent processes payments, issues digital receipts, and tracks sales metrics. A manager agent generates inventory forecasts, suggests restocking schedules, and analyzes performance data. With customizable prompts and pipeline configurations, developers can quickly adapt the agents to unique shop policies and menu items. The repository includes setup scripts, API integrations, and example workflows to simulate realistic customer interactions and operational analytics in a developer-friendly environment.
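    As a toy illustration of the role split described above, the sketch below routes customer requests to a barista, cashier, or manager role. The keyword router and role prompts are hypothetical simplifications; the actual project wires these roles through LangChain and OpenAI models.

    ```python
    # Hypothetical role prompts; the real project configures these via LangChain.
    ROLE_PROMPTS = {
        "barista": "You prepare drinks. Confirm the order and flag missing ingredients.",
        "cashier": "You handle payment. Quote the total and issue a receipt.",
        "manager": "You watch inventory. Suggest restocking when stock runs low.",
    }

    def route(request: str) -> str:
        """Very small keyword router standing in for an LLM-based dispatcher."""
        text = request.lower()
        if any(w in text for w in ("pay", "bill", "receipt")):
            return "cashier"
        if any(w in text for w in ("stock", "inventory", "restock")):
            return "manager"
        return "barista"

    def handle(request: str) -> str:
        role = route(request)
        # A real agent would send ROLE_PROMPTS[role] plus the request to an LLM here.
        return f"[{role}] handling: {request!r}"

    for msg in ("One oat-milk latte, extra hot", "Can I pay by card?", "We are low on beans"):
        print(handle(msg))
    ```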
  • A Python sample demonstrating LLM-based AI agents with integrated tools like search, code execution, and QA.
    What is LLM Agents Example?
    LLM Agents Example provides a hands-on codebase for building AI agents in Python. It demonstrates registering custom tools (web search, math solver via WolframAlpha, CSV analyzer, Python REPL), creating chat and retrieval-based agents, and connecting to vector stores for document question answering. The repo illustrates patterns for maintaining conversational memory, dispatching tool calls dynamically, and chaining multiple LLM prompts to solve complex tasks. Users learn how to integrate third-party APIs, structure agent workflows, and extend the framework with new capabilities—serving as a practical guide for developer experimentation and prototyping.
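    The snippet below sketches the tool-registration and dynamic-dispatch pattern mentioned above in plain Python: tools register under a name, and a parser turns a model's "tool: argument" output into a call. The decorator, tool names, and output format are illustrative assumptions rather than the repository's actual code.

    ```python
    from typing import Callable, Dict

    TOOLS: Dict[str, Callable[[str], str]] = {}

    def tool(name: str):
        """Decorator that registers a callable under a tool name."""
        def register(fn: Callable[[str], str]) -> Callable[[str], str]:
            TOOLS[name] = fn
            return fn
        return register

    @tool("search")
    def search(query: str) -> str:
        return f"(pretend web results for {query!r})"

    @tool("math")
    def math_solver(expression: str) -> str:
        return str(eval(expression, {"__builtins__": {}}))   # demo only: never eval untrusted input

    def dispatch(model_output: str) -> str:
        """Parse a 'tool: argument' string, as an LLM might emit, and run the matching tool."""
        name, _, arg = model_output.partition(":")
        handler = TOOLS.get(name.strip())
        return handler(arg.strip()) if handler else f"unknown tool {name!r}"

    print(dispatch("math: 2 * (3 + 4)"))
    print(dispatch("search: python agent frameworks"))
    ```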
  • pyafai is a modular Python framework for building, training, and running autonomous AI agents with plug-in memory and tool support.
    What is pyafai?
    pyafai is an open-source Python library designed to help developers architect, configure, and execute autonomous AI agents. It offers pluggable modules for memory management to retain context, tool integration for external API calls, observers for environment monitoring, planners for decision making, and an orchestrator to run agent loops. Logging and monitoring features provide visibility into agent performance and behavior. pyafai supports major LLM providers out of the box, enables custom module creation, and reduces boilerplate so teams can rapidly prototype virtual assistants, research bots, and automation workflows with full control over each component.
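    To make the pluggable-module idea concrete, here is a small sketch of memory and tool components behind typed interfaces, wired together by an orchestrator loop. The Protocol classes, ListMemory, EchoTool, and Orchestrator are hypothetical illustrations of the pattern, not pyafai's own classes.

    ```python
    from typing import List, Protocol

    class Tool(Protocol):
        name: str
        def run(self, query: str) -> str: ...

    class Memory(Protocol):
        def remember(self, fact: str) -> None: ...
        def recall(self) -> List[str]: ...

    class ListMemory:
        def __init__(self) -> None:
            self.facts: List[str] = []
        def remember(self, fact: str) -> None:
            self.facts.append(fact)
        def recall(self) -> List[str]:
            return list(self.facts)

    class EchoTool:
        name = "echo"
        def run(self, query: str) -> str:
            return f"echo: {query}"

    class Orchestrator:
        """Runs the agent loop: pick a tool, execute it, store the result in memory."""
        def __init__(self, memory: Memory, tools: List[Tool]) -> None:
            self.memory, self.tools = memory, {t.name: t for t in tools}

        def step(self, task: str) -> str:
            result = self.tools["echo"].run(task)     # a planner module would choose here
            self.memory.remember(result)
            return result

    agent = Orchestrator(ListMemory(), [EchoTool()])
    print(agent.step("summarize today's tickets"))
    print(agent.memory.recall())
    ```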
  • Open-source Python framework enabling developers to build AI agents with tool integration and multi-LLM support.
    What is X AI Agent?
    X AI Agent provides a modular architecture for building intelligent agents. It supports seamless integration with external tools and APIs, configurable memory modules, and multi-LLM orchestration. Developers can define custom skills, tool connectors, and workflows in code, then deploy agents that fetch data, generate content, automate processes, and handle complex dialogues autonomously.
  • A Python framework enabling developers to build, deploy, and manage decentralized Autonomous Economic Agents across blockchain and peer-to-peer networks.
    What is Autonomous Economic Agents (AEA)?
    Autonomous Economic Agents (AEA) by Fetch.ai is a versatile framework that empowers developers to design, implement, and orchestrate autonomous software agents capable of interacting with each other, external environments, and digital ledgers. Leveraging a plugin-based architecture, AEA provides pre-built modules for communication protocols, cryptographic ledger APIs, decentralized identity, and customizable decision-making skills. Agents can discover and transact within decentralized marketplaces, perform goal-driven behaviors, and adapt through real-time data feeds. The framework supports simulation tools for testing and debugging multi-agent scenarios, as well as deployment onto live blockchains or peer-to-peer networks. With built-in interoperability and agent-to-agent messaging, AEA streamlines the development of complex autonomous economic applications such as energy trading, supply chain optimization, and smart IoT coordination.
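    To give a flavour of the agent-to-agent negotiation described above, the sketch below runs a toy propose/accept exchange between a buyer and a seller over in-process asyncio queues. The message shapes, the price rule, and the queue transport are illustrative assumptions only; the AEA framework uses its own protocol, skill, and ledger components.

    ```python
    import asyncio

    async def seller(inbox: asyncio.Queue, outbox: asyncio.Queue) -> None:
        """Quotes a price for each 'cfp' (call for proposal) it receives."""
        while True:
            msg = await inbox.get()
            if msg["type"] == "cfp":
                await outbox.put({"type": "propose", "price": 10.0})
            elif msg["type"] == "accept":
                await outbox.put({"type": "confirm"})
                return

    async def buyer(inbox: asyncio.Queue, outbox: asyncio.Queue) -> None:
        await outbox.put({"type": "cfp", "item": "energy-kwh"})
        proposal = await inbox.get()
        if proposal["price"] <= 12.0:                 # simple decision rule
            await outbox.put({"type": "accept"})
        print("buyer received:", await inbox.get())

    async def main() -> None:
        to_seller, to_buyer = asyncio.Queue(), asyncio.Queue()
        await asyncio.gather(seller(to_seller, to_buyer), buyer(to_buyer, to_seller))

    asyncio.run(main())
    ```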
  • HFO_DQN is a reinforcement learning framework that applies Deep Q-Networks to train soccer agents in the RoboCup Half Field Offense environment.
    What is HFO_DQN?
    HFO_DQN combines Python and TensorFlow to deliver a complete pipeline for training soccer agents using Deep Q-Networks. Users can clone the repository, install dependencies including the HFO simulator and Python libraries, and configure training parameters in YAML files. The framework implements experience replay, target network updates, epsilon-greedy exploration, and reward shaping tailored for the half field offense domain. It features scripts for agent training, performance logging, evaluation matches, and plotting results. Its modular code structure allows integration of custom neural network architectures, alternative RL algorithms, and multi-agent coordination strategies. Outputs include trained models, performance metrics, and behavior visualizations, facilitating research in reinforcement learning and multi-agent systems.
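    The sketch below shows the core DQN ingredients named above (experience replay, a frozen target network, and epsilon-greedy exploration) in a compact NumPy form with a toy environment and a linear Q-function standing in for the project's TensorFlow network and the HFO simulator. Dimensions, hyperparameters, and the random transitions are placeholder assumptions, not values from the repository.

    ```python
    import random
    from collections import deque
    import numpy as np

    STATE_DIM, N_ACTIONS = 8, 4            # stand-ins for HFO's feature set and action set

    class LinearQ:
        """Tiny linear Q-function standing in for the project's deep network."""
        def __init__(self):
            self.w = np.zeros((STATE_DIM, N_ACTIONS))
        def q(self, s):
            return s @ self.w
        def copy_from(self, other):
            self.w = other.w.copy()

    def epsilon_greedy(net, state, eps):
        if random.random() < eps:
            return random.randrange(N_ACTIONS)
        return int(np.argmax(net.q(state)))

    replay = deque(maxlen=10_000)
    online, target = LinearQ(), LinearQ()
    gamma, alpha, eps = 0.99, 0.01, 0.1

    def train_step(batch_size=32):
        batch = random.sample(replay, min(batch_size, len(replay)))
        for s, a, r, s2, done in batch:
            # Standard DQN target: bootstrap from the frozen target network.
            y = r if done else r + gamma * np.max(target.q(s2))
            td_error = y - online.q(s)[a]
            online.w[:, a] += alpha * td_error * s      # gradient step for a linear model

    # Toy environment interaction so the pipeline runs end to end.
    for step in range(500):
        s = np.random.randn(STATE_DIM)
        a = epsilon_greedy(online, s, eps)
        r, s2, done = np.random.rand(), np.random.randn(STATE_DIM), step % 50 == 0
        replay.append((s, a, r, s2, done))
        if len(replay) > 32:
            train_step()
        if step % 100 == 0:
            target.copy_from(online)                    # periodic target-network sync
    print("trained weights shape:", online.w.shape)
    ```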