Comprehensive Plugin Support Tools for Every Need

Get access to plugin support solutions that address multiple requirements. One-stop resources for streamlined workflows.

Plugin Support

  • Flock is a TypeScript framework that orchestrates LLMs, tools, and memory to build autonomous AI agents.
    What is Flock?
    Flock provides a developer-friendly, modular framework for chaining multiple LLM calls, managing conversational memory, and integrating external tools into autonomous agents. With support for asynchronous execution and plugin extensions, Flock enables fine-grained control over agent behaviors, triggers, and context handling. It works seamlessly in Node.js and browser environments, letting teams rapidly prototype chatbots, data-processing workflows, virtual assistants, and other AI-driven automation solutions.
  • Nestor is an open-source Python framework enabling developers to build contextual AI agents with memory, tool integration, and LLM orchestration.
    What is Nestor?
    Nestor offers a modular architecture to assemble AI agents that maintain conversation state, invoke external tools, and customize processing pipelines. Key features include session-based memory stores, a registry for tool functions or plugins, flexible prompt templating, and unified LLM client interfaces. Agents can execute sequential tasks, perform decision branching, and integrate with REST APIs or local scripts. Nestor is provider-agnostic, letting users work with OpenAI, Azure, or self-hosted LLM providers.
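    A minimal conceptual sketch of the registry-plus-session-memory pattern described above; the names (ToolRegistry, SessionMemory, run_agent) are illustrative assumptions, not Nestor's actual API.

      # Conceptual sketch of a tool registry with session memory.
      # All class and function names are illustrative assumptions.
      from typing import Callable, Dict, List

      class ToolRegistry:
          def __init__(self) -> None:
              self._tools: Dict[str, Callable[[str], str]] = {}

          def register(self, name: str, fn: Callable[[str], str]) -> None:
              self._tools[name] = fn

          def call(self, name: str, arg: str) -> str:
              return self._tools[name](arg)

      class SessionMemory:
          def __init__(self) -> None:
              self.turns: List[str] = []

          def add(self, text: str) -> None:
              self.turns.append(text)

      def run_agent(user_input: str, tools: ToolRegistry, memory: SessionMemory) -> str:
          memory.add(f"user: {user_input}")
          # A real agent would ask an LLM which tool to invoke; here one is hard-coded.
          answer = tools.call("echo", user_input)
          memory.add(f"agent: {answer}")
          return answer

      tools = ToolRegistry()
      tools.register("echo", lambda s: f"you said: {s}")
      memory = SessionMemory()
      print(run_agent("hello", tools, memory))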
  • LLPhant is a lightweight PHP framework for building modular, customizable LLM-based agents with tool integration and memory management.
    What is LLPhant?
    LLPhant is an open-source PHP framework enabling developers to create versatile LLM-driven agents. It offers built-in abstractions for tool integration (APIs, search, databases), memory management for multi-turn conversations, and customizable decision loops. With support for multiple LLM backends (OpenAI, Hugging Face, others), plugin-style components, and configuration-driven workflows, LLPhant accelerates agent development. Use it to prototype chatbots, automate tasks, or build digital assistants that leverage external tools and contextual memory without boilerplate code.
  • Multi-Agent AI Researcher is an open-source framework orchestrating multiple specialized AI agents to autonomously generate research hypotheses, conduct experiments, analyze results, and draft papers.
    What is Multi-Agent AI Researcher?
    Multi-Agent AI Researcher provides a modular, extensible framework where users can configure and deploy multiple AI agents to collaboratively tackle complex scientific inquiries. It includes a hypothesis generation agent that proposes research directions based on literature analysis, an experiment simulation agent that models and tests hypotheses, a data analysis agent that processes simulation outputs, and a drafting agent that compiles findings into structured research documents. With plugin support, users can incorporate custom models and data sources. The orchestrator manages agent interactions, logging each step for traceability. Ideal for automating repetitive tasks and accelerating R&D workflows, it ensures reproducibility and scalability across diverse research domains.
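    A conceptual Python sketch of the hypothesis → experiment → analysis → draft pipeline described above, with per-step logging for traceability; the agent functions and orchestrator shown are assumptions for illustration, not the project's actual interface.

      # Conceptual pipeline of specialized agents with step logging.
      # All names here are illustrative assumptions.
      from typing import Callable, List, Tuple

      def hypothesis_agent(topic: str) -> str:
          return f"Hypothesis about {topic}"

      def experiment_agent(hypothesis: str) -> str:
          return f"Simulated results for: {hypothesis}"

      def analysis_agent(results: str) -> str:
          return f"Analysis of: {results}"

      def drafting_agent(analysis: str) -> str:
          return f"Draft paper based on: {analysis}"

      def orchestrate(topic: str) -> Tuple[str, List[str]]:
          log: List[str] = []  # each step is recorded for traceability
          steps: List[Callable[[str], str]] = [
              hypothesis_agent, experiment_agent, analysis_agent, drafting_agent,
          ]
          artifact = topic
          for step in steps:
              artifact = step(artifact)
              log.append(f"{step.__name__}: {artifact}")
          return artifact, log

      draft, trace = orchestrate("protein folding")
      print(draft)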
  • Notte is an open-source Python framework for building customizable AI agents with memory, tool integration, and multi-step reasoning.
    What is Notte?
    Notte is a developer-centric Python framework designed for orchestrating AI agents powered by large language models. It provides built-in memory modules to store and retrieve conversational context, flexible tool integration for external APIs or custom functions, and a planning engine that sequences tasks. With Notte, you can rapidly prototype conversational assistants, data analysis bots, or automated workflows, while benefiting from open-source extensibility and cross-platform support.
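    A hedged sketch of the memory-plus-planning loop described above; the Memory class and plan/execute functions are assumptions for illustration, not Notte's documented interface.

      # Illustrative planning loop with a memory module.
      # Class and function names are assumptions, not Notte's API.
      from typing import Callable, Dict, List

      class Memory:
          def __init__(self) -> None:
              self.context: List[str] = []

          def remember(self, item: str) -> None:
              self.context.append(item)

      def plan(goal: str) -> List[str]:
          # A real planner would call an LLM; this stub returns fixed sub-tasks.
          return [f"research {goal}", f"summarize findings on {goal}"]

      def execute(task: str, tools: Dict[str, Callable[[str], str]], memory: Memory) -> str:
          result = tools["search"](task)
          memory.remember(result)
          return result

      tools = {"search": lambda q: f"results for '{q}'"}
      memory = Memory()
      for task in plan("solar batteries"):
          print(execute(task, tools, memory))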
  • HyperChat enables multi-model AI chat in applications, with memory management, streaming responses, function calling, and plugin integration.
    What is HyperChat?
    HyperChat is a developer-centric AI agent framework that simplifies embedding conversational AI into applications. It unifies connections to various LLM providers, handles session context and memory persistence, and delivers streamed partial replies for responsive UIs. Built-in function calling and plugin support enable executing external APIs, enriching conversations with real-world data and actions. Its modular architecture and UI toolkit allow rapid prototyping and production-grade deployments across web, Electron, and Node.js environments.
  • AIBrokers orchestrates multiple AI models and agents, enabling dynamic task routing, conversation management, and plugin integration.
    What is AIBrokers?
    AIBrokers provides a unified interface for managing and executing workflows that involve multiple AI agents and models. It allows developers to define brokers that oversee task distribution, selecting the most suitable model—such as GPT-4 for language tasks or a vision model for image analysis—based on customizable routing rules. ConversationManager supports context awareness by storing and retrieving past dialogues, while the MemoryStore module offers persistent state handling across sessions. PluginManager enables seamless integration of external APIs or custom functions, extending the broker’s capabilities. With built-in logging, monitoring hooks, and customizable error handling, AIBrokers simplifies the development and deployment of complex AI-driven applications in production environments.
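    A conceptual Python sketch of the routing pattern described above, where a broker picks a handler by task type; the rule format and model handlers are assumptions for illustration, not AIBrokers' actual configuration.

      # Conceptual broker that routes tasks to models via customizable rules.
      # The rule format and model handlers are illustrative assumptions only.
      from typing import Callable, Dict

      def language_model(task: str) -> str:
          return f"[text model] handled: {task}"

      def vision_model(task: str) -> str:
          return f"[vision model] handled: {task}"

      class Broker:
          def __init__(self, rules: Dict[str, Callable[[str], str]],
                       default: Callable[[str], str]) -> None:
              self.rules = rules      # keyword -> handler
              self.default = default

          def route(self, task: str) -> str:
              for keyword, handler in self.rules.items():
                  if keyword in task.lower():
                      return handler(task)
              return self.default(task)

      broker = Broker({"image": vision_model, "photo": vision_model}, default=language_model)
      print(broker.route("Describe this image of a cat"))
      print(broker.route("Summarize this article"))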
  • The Azure AI Agents JavaScript SDK lets developers build and run Azure AI agents with chat, function calling, and orchestration features.
    What is Azure AI Agents JavaScript SDK?
    The Azure AI Agents JavaScript SDK is a client framework and sample code repository that enables developers to build, customize, and orchestrate AI agents using Azure OpenAI and other cognitive services. It offers support for multi-turn chat, retrieval-augmented generation, function calling, and integration with external tools and APIs. Developers can manage agent workflows, handle memory, and extend capabilities via plugins. Sample patterns include knowledge base Q&A bots, autonomous task executors, and conversational assistants, making it easy to prototype and deploy intelligent solutions.
  • Junjo Python API offers Python developers seamless integration of AI agents, tool orchestration, and memory management in applications.
    What is Junjo Python API?
    Junjo Python API is an SDK that empowers developers to integrate AI agents into Python applications. It provides a unified interface for defining agents, connecting to LLMs, orchestrating tools like web search, databases, or custom functions, and maintaining conversational memory. Developers can build chains of tasks with conditional logic, stream responses to clients, and handle errors gracefully. The API supports plugin extensions, multilingual processing, and real-time data retrieval, enabling use cases from automated customer support to data analysis bots. With comprehensive documentation, code samples, and Pythonic design, Junjo Python API reduces time-to-market and operational overhead of deploying intelligent agent-based solutions.
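    A hedged sketch of chaining tasks with conditional logic and graceful error handling, as described above; none of the names below are from Junjo's real SDK.

      # Illustrative task chain with conditional branching and error handling.
      # Names are assumptions, not Junjo's actual interface.
      from typing import Callable, List

      def fetch_data(query: str) -> str:
          return f"raw data for '{query}'"

      def summarize(text: str) -> str:
          return f"summary of {text}"

      def run_chain(query: str, steps: List[Callable[[str], str]]) -> str:
          value = query
          for step in steps:
              try:
                  value = step(value)
              except Exception as exc:      # graceful error handling
                  return f"chain stopped at {step.__name__}: {exc}"
              if not value:                 # conditional branch: stop on empty result
                  return "no data found"
          return value

      print(run_chain("quarterly sales", [fetch_data, summarize]))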
  • LLM-Agent is a Python library for creating LLM-based agents that integrate external tools, execute actions, and manage workflows.
    What is LLM-Agent?
    LLM-Agent provides a structured architecture for building intelligent agents using LLMs. It includes a toolkit for defining custom tools, memory modules for context preservation, and executors that orchestrate complex chains of actions. Agents can call APIs, run local processes, query databases, and manage conversational state. Prompt templates and plugin hooks allow fine-tuning of agent behavior. Designed for extensibility, LLM-Agent supports adding new tool interfaces, custom evaluators, and dynamic routing of tasks, enabling automated research, data analysis, code generation, and more.
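    A conceptual sketch of the tool-plus-executor architecture described above; the decorator-based registration and the executor shown are illustrative assumptions, not the library's API.

      # Illustrative custom-tool registration and a simple executor that
      # orchestrates a chain of actions. Names are assumptions, not LLM-Agent's API.
      from typing import Callable, Dict, List

      TOOLS: Dict[str, Callable[[str], str]] = {}

      def tool(name: str):
          """Register a function as a named tool."""
          def decorator(fn: Callable[[str], str]) -> Callable[[str], str]:
              TOOLS[name] = fn
              return fn
          return decorator

      @tool("lookup")
      def lookup(query: str) -> str:
          return f"database row for '{query}'"

      @tool("format")
      def format_result(row: str) -> str:
          return f"report: {row}"

      def execute(plan: List[Dict[str, str]]) -> str:
          """Run a chain of {'tool': ..., 'input': ...} steps, feeding outputs forward."""
          value = ""
          for step in plan:
              value = TOOLS[step["tool"]](step.get("input") or value)
          return value

      print(execute([{"tool": "lookup", "input": "order 42"}, {"tool": "format"}]))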
  • RainbowGPT is a self-hosted AI agent management platform enabling creation, customization, and deployment of GPT-based chatbots with memory and plugin support.
    What is RainbowGPT?
    RainbowGPT provides a complete framework for designing, customizing, and deploying AI agents powered by OpenAI models. It includes a FastAPI backend, LangChain integration for tool and memory management, and a React-based UI for agent creation and testing. Users can upload documents for vector-based knowledge retrieval, define custom prompts and behaviors, and connect external APIs or functions. The platform logs interactions for analysis and supports multi-agent workflows, enabling complex automation and conversational pipelines.
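    A minimal sketch of how a FastAPI backend like the one described might expose a chat endpoint; the run_agent stub stands in for RainbowGPT's actual LangChain-backed agent, which is not shown here.

      # Minimal FastAPI chat endpoint in the spirit of the backend described above.
      # run_agent is a stub standing in for the real LangChain/OpenAI-backed agent.
      from fastapi import FastAPI
      from pydantic import BaseModel

      app = FastAPI()

      class ChatRequest(BaseModel):
          session_id: str
          message: str

      def run_agent(session_id: str, message: str) -> str:
          # Placeholder: a real deployment would retrieve vector-store context,
          # call the configured LLM, and log the interaction.
          return f"echo ({session_id}): {message}"

      @app.post("/chat")
      def chat(req: ChatRequest) -> dict:
          return {"reply": run_agent(req.session_id, req.message)}

      # Run with: uvicorn this_module:app --reload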
  • Sys-Agent is a self-hosted AI-driven personal assistant enabling CLI command execution, file management, and system monitoring via natural language.
    What is Sys-Agent?
    Sys-Agent provides a secure, self-hosted environment where users issue natural language instructions to perform system-level tasks. It connects with AI backends like OpenAI, local LLMs or other model services, translating prompts into shell commands, file operations, and infrastructure checks. Users can customize prompts, define task templates, scale through Docker or Kubernetes, and extend functionality via plugins. Sys-Agent logs all actions and offers audit trails to ensure transparency and security.
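    A conceptual sketch of the translate-then-confirm flow such a tool implies, with an audit log; the prompt-to-command mapping here is a hard-coded placeholder, not Sys-Agent's implementation.

      # Conceptual natural-language-to-command flow with confirmation and an
      # audit log. The translation step is a placeholder for an LLM backend call.
      import shlex
      import subprocess
      from datetime import datetime, timezone

      def translate(instruction: str) -> str:
          # Placeholder mapping; a real system would ask the configured LLM.
          if "disk" in instruction.lower():
              return "df -h"
          return "echo 'no command for that instruction'"

      def run(instruction: str, audit_log: list) -> str:
          command = translate(instruction)
          audit_log.append((datetime.now(timezone.utc).isoformat(), instruction, command))
          if input(f"Run `{command}`? [y/N] ").strip().lower() != "y":
              return "cancelled"
          result = subprocess.run(shlex.split(command), capture_output=True, text=True)
          return result.stdout or result.stderr

      log: list = []
      print(run("show me disk usage", log))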
  • Toolbox-macos is a macOS menu bar app providing AI-driven text summarization, translation, code generation, image creation, and custom automations.
    What is Toolbox-macos?
    Toolbox-macos transforms your Mac into an AI agent hub by embedding a versatile set of AI-powered tools in a native menu bar app. It leverages OpenAI's GPT models and other APIs so you can select any text and summarize it, translate between languages, generate code, create custom images, search the web, or automate workflows with custom scripts and plugins. You can configure global hotkeys, define macros, and integrate third-party AI services to tailor responses. By offering instant AI capabilities across all applications without context switching, it enhances productivity, speeds up creative tasks, and centralizes your favorite AI utilities. Users can invoke commands via the macOS Command Palette or through configurable keyboard shortcuts, ensuring seamless integration with editing, browsing, or code development workflows. The open architecture allows community-driven extensions and supports local AI model execution for privacy-sensitive tasks.
  • UniChat is a cross-platform desktop AI chat client unifying multiple language models like OpenAI, Claude, and local models.
    What is UniChat?
    UniChat serves as a unified interface for interacting with various AI language models and chat services, enabling users to conduct conversations with multiple providers from one desktop application. It integrates online APIs—such as OpenAI GPT-3, GPT-4, Anthropic Claude, and Google PaLM—alongside local models like GPT4All or LLaMA. The client supports features such as conversation history storage, exportable chat logs, customizable prompt templates, file upload for context, and theming options. A plugin system allows developers and the community to add new capabilities, connectors, or UI enhancements. By managing API keys centrally and providing offline mode for local models, UniChat gives users complete control over their AI interactions, privacy, and costs.
  • AGNO Agent UI offers customizable React components and hooks for building streaming-enabled AI Agent chat interfaces in web apps.
    What is AGNO Agent UI?
    AGNO Agent UI is a React component library optimized for constructing AI Agent chat experiences. It includes prebuilt chat windows, message bubbles, input forms, loading indicators, and error-handling patterns. Developers can leverage real-time streaming of model responses, manage conversation state with custom hooks, and theme components to match their brand. The library integrates with popular agent frameworks such as LangChain, enabling multi-step workflows and plugin support. With responsive design and ARIA compliance, AGNO Agent UI ensures accessible, cross-device interactions, letting teams focus on agent logic rather than UI scaffolding.
  • AgentMesh orchestrates multiple AI agents in Python, enabling asynchronous workflows and specialized task pipelines using a mesh network.
    What is AgentMesh?
    AgentMesh provides a modular infrastructure for developers to create networks of AI agents, each focusing on a specific task or domain. Agents can be dynamically discovered and registered at runtime, exchange messages asynchronously, and follow configurable routing rules. The framework handles retries, fallbacks, and error recovery, allowing multi-agent pipelines for data processing, decision support, or conversational use cases. It integrates easily with existing LLMs and custom models via a simple plugin interface.
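    A conceptual asyncio sketch of asynchronous message exchange between two agents registered at runtime, mirroring the mesh pattern described above; the registry and message shape are assumptions for illustration, not AgentMesh's actual interface.

      # Conceptual asyncio mesh: agents registered by name exchange messages
      # through queues. The registry and message shape are illustrative only.
      import asyncio
      from typing import Callable, Dict, Optional

      registry: Dict[str, asyncio.Queue] = {}     # runtime agent registration

      async def agent(name: str, handler: Callable[[str], str],
                      forward_to: Optional[str] = None) -> None:
          inbox: asyncio.Queue = asyncio.Queue()
          registry[name] = inbox
          while True:
              message = await inbox.get()
              if message is None:                 # shutdown signal
                  break
              result = handler(message)
              if forward_to and forward_to in registry:
                  await registry[forward_to].put(result)
              else:
                  print(f"{name} finished with: {result}")

      async def main() -> None:
          tasks = [
              asyncio.create_task(agent("cleaner", str.strip, forward_to="upper")),
              asyncio.create_task(agent("upper", str.upper)),
          ]
          await asyncio.sleep(0)                  # let agents register
          await registry["cleaner"].put("  hello mesh  ")
          await asyncio.sleep(0.1)
          for queue in registry.values():         # graceful shutdown
              await queue.put(None)
          await asyncio.gather(*tasks)

      asyncio.run(main())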
  • AutoGen UI is a React-based toolkit to build interactive UIs and dashboards for orchestrating multi-agent AI conversations.
    What is AutoGen UI?
    AutoGen UI is a frontend toolkit designed to render and manage multi-agent conversational flows. It offers ready-made components such as chat windows, agent selectors, message timelines, and debugging panels. Developers can configure multiple AI agents, stream responses in real time, log each step of the conversation, and apply custom styling. It integrates easily with backend orchestration libraries to provide a complete end-to-end interface for building and monitoring AI agent interactions.
  • Autogpt is a Rust library for building autonomous AI agents that interact with the OpenAI API to complete multi-step tasks.
    What is autogpt?
    Autogpt is a developer-focused Rust framework for constructing autonomous AI agents. It offers typed interfaces to the OpenAI API, built-in memory handling, context chaining, and extensible plugin support. Agents can be configured to perform chained prompts, maintain conversation state, and execute dynamic tasks programmatically. Suitable for embedding in CLI tools, backend services, or research prototypes, Autogpt simplifies orchestration of complex AI workflows while leveraging Rust’s performance and safety guarantees.
  • Build with ADK is a CLI toolkit to scaffold, test, and deploy autonomous AI agents with built-in workflows and LLM integrations.
    What is Build with ADK?
    Build with ADK streamlines the creation of AI agents by providing a CLI scaffolding tool, workflow definitions, LLM integration modules, testing utilities, logging, and deployment support. Developers can initialize agent projects, select AI models, configure prompts, connect external tools or APIs, run local tests, and push their agents to production or container platforms—all with simple commands. The modular architecture allows easy extension with plugins and supports multiple programming languages for maximum flexibility.