Comprehensive Plugin Support Tools for Every Need

Get access to plugin support solutions that address multiple requirements. One-stop resources for streamlined workflows.

Plugin Support

  • LLPhant is a lightweight PHP framework for building modular, customizable LLM-based agents with tool integration and memory management.
    What is LLPhant?
    LLPhant is an open-source PHP framework enabling developers to create versatile LLM-driven agents. It offers built-in abstractions for tool integration (APIs, search, databases), memory management for multi-turn conversations, and customizable decision loops. With support for multiple LLM backends (OpenAI, Hugging Face, others), plugin-style components, and configuration-driven workflows, LLPhant accelerates agent development. Use it to prototype chatbots, automate tasks, or build digital assistants that leverage external tools and contextual memory without boilerplate code.
  • A CLI-based AI Agent converting natural language instructions into shell commands to automate workflows and tasks.
    What is MCP-CLI-Agent?
    MCP-CLI-Agent is an open-source, extensible AI agent for the command line. Users write natural-language prompts and the tool generates and runs the corresponding shell commands, handles multi-step task chaining, and logs outputs. Built on top of GPT models, it supports custom plugins, configuration files, and context-aware execution, making it ideal for automating DevOps tasks, code generation, environment setup, and data fetching directly from the terminal. A generic sketch of this natural-language-to-shell pattern appears after this list.
  • Web platform for building AI agents with memory graphs, document ingestion, and plugin integration for task automation.
    What is Mindcore Labs?
    Mindcore Labs provides a no-code and developer-friendly environment to design and launch AI agents. It features a knowledge graph memory system that retains context over time, supports ingestion of documents and data sources, and integrates with external APIs and plugins. Users can configure agents via an intuitive UI or CLI, test them in real time, and deploy to production endpoints. Built-in monitoring and analytics help track performance and optimize agent behaviors.
  • An open-source framework orchestrating multiple specialized AI agents to autonomously generate research hypotheses, conduct experiments, analyze results, and draft papers.
    What is Multi-Agent AI Researcher?
    Multi-Agent AI Researcher provides a modular, extensible framework where users can configure and deploy multiple AI agents to collaboratively tackle complex scientific inquiries. It includes a hypothesis generation agent that proposes research directions based on literature analysis, an experiment simulation agent that models and tests hypotheses, a data analysis agent that processes simulation outputs, and a drafting agent that compiles findings into structured research documents. With plugin support, users can incorporate custom models and data sources. The orchestrator manages agent interactions, logging each step for traceability. Ideal for automating repetitive tasks and accelerating R&D workflows, it ensures reproducibility and scalability across diverse research domains. A simplified sketch of this kind of sequential agent pipeline appears after this list.
  • Camel is an open-source AI agent orchestration framework enabling multi-agent collaboration, tool integration, and planning with LLMs & knowledge graphs.
    What is Camel AI?
    Camel AI is an open-source framework designed to simplify the creation and orchestration of intelligent agents. It offers abstractions for chaining large language models, integrating external tools and APIs, managing knowledge graphs, and persisting memory. Developers can define multi-agent workflows, decompose tasks into subplans, and monitor execution through a CLI or web UI. Built on Python and Docker, Camel AI allows seamless swapping of LLM providers, custom tool plugins, and hybrid planning strategies, accelerating development of automated assistants, data pipelines, and autonomous workflows at scale.
  • Notte is an open-source Python framework for building customizable AI agents with memory, tool integration, and multi-step reasoning.
    What is Notte?
    Notte is a developer-centric Python framework designed for orchestrating AI agents powered by large language models. It provides built-in memory modules to store and retrieve conversational context, flexible tool integration for external APIs or custom functions, and a planning engine that sequences tasks. With Notte, you can rapidly prototype conversational assistants, data analysis bots, or automated workflows, while benefiting from open-source extensibility and cross-platform support.
  • Spigot is a high-performance Minecraft server solution.
    What is Spigot?
    Spigot is a high-performance Minecraft server solution that enhances gameplay by offering a more optimized, customizable experience. It is a fork of CraftBukkit with added performance tweaks and features, making it an ideal choice for players and server administrators seeking a smoother, more responsive gaming environment. Spigot also supports a wide range of plugins, allowing for extensive customization of gameplay mechanics and aesthetics. Whether you are running small private servers or large public ones, Spigot adapts to your needs by offering enhanced server performance and flexibility.
  • Swarms is an open-source platform to build, orchestrate, and deploy collaborative multi-agent AI systems with customizable workflows.
    What is Swarms?
    Swarms operates as a Python-first framework and web-based interface, empowering users to configure individual agents with specific roles, memory management, and custom prompts. Users define agent interactions through a visual flow builder or YAML configuration, orchestrating complex decision trees, debates, and collaborative tasks. The platform supports plugin integration for data querying, knowledge base access, and third-party API calls. Upon deployment, Swarms provides real-time monitoring of agent activities, performance metrics, and logs. It scales horizontally using container orchestration tools, enabling large-scale AI simulations, robotic control architectures, or intelligent workflow automations. The open-source architecture ensures extensibility, community-driven enhancements, and self-hosting options for full data control. A minimal sketch of configuration-driven agent definition appears after this list.
  • A minimal Python framework to create autonomous GPT-powered AI agents with tool integration and memory.
    What is TinyAgent?
    TinyAgent provides a lightweight agent framework for orchestrating complex tasks with OpenAI GPT models. Developers install it via pip, configure an API key, define tools or plugins, and leverage in-memory context to maintain multi-step conversations. TinyAgent supports chaining tasks, integrating external APIs, and persisting user or system memories. Its simple Pythonic API lets you prototype autonomous data analysis workflows, customer service chatbots, code generation assistants, or any use case requiring an intelligent, stateful agent. The library remains fully open-source, extensible, and platform-agnostic. A generic function-calling sketch illustrating this tool-plus-memory pattern appears after this list.
  • HyperChat enables multi-model AI chat with memory management, streaming responses, function calling, and plugin integration in applications.
    What is HyperChat?
    HyperChat is a developer-centric AI agent framework that simplifies embedding conversational AI into applications. It unifies connections to various LLM providers, handles session context and memory persistence, and delivers streamed partial replies for responsive UIs. Built-in function calling and plugin support enable calling external APIs, enriching conversations with real-world data and actions. Its modular architecture and UI toolkit allow rapid prototyping and production-grade deployments across web, Electron, and Node.js environments. A minimal streamed-reply sketch appears after this list.
  • AIBrokers orchestrates multiple AI models and agents, enabling dynamic task routing, conversation management, and plugin integration.
    What is AIBrokers?
    AIBrokers provides a unified interface for managing and executing workflows that involve multiple AI agents and models. It allows developers to define brokers that oversee task distribution, selecting the most suitable model (such as GPT-4 for language tasks or a vision model for image analysis) based on customizable routing rules. ConversationManager supports context awareness by storing and retrieving past dialogues, while the MemoryStore module offers persistent state handling across sessions. PluginManager enables seamless integration of external APIs or custom functions, extending the broker's capabilities. With built-in logging, monitoring hooks, and customizable error handling, AIBrokers simplifies the development and deployment of complex AI-driven applications in production environments. A simplified routing-broker sketch appears after this list.
  • A JavaScript SDK for building and running Azure AI Agents with chat, function calling, and orchestration features.
    What is Azure AI Agents JavaScript SDK?
    The Azure AI Agents JavaScript SDK is a client framework and sample code repository that enables developers to build, customize, and orchestrate AI agents using Azure OpenAI and other cognitive services. It offers support for multi-turn chat, retrieval-augmented generation, function calling, and integration with external tools and APIs. Developers can manage agent workflows, handle memory, and extend capabilities via plugins. Sample patterns include knowledge base Q&A bots, autonomous task executors, and conversational assistants, making it easy to prototype and deploy intelligent solutions.
  • Hive is a Node.js framework enabling orchestration of multi-agent AI workflows with memory management and tool integrations.
    What is Hive?
    Hive is a robust AI agent orchestration platform built for Node.js environments. It provides a modular system for defining, managing, and executing multiple AI agents in parallel or sequential workflows. Each agent can be configured with specific roles, prompt templates, memory stores, and external tool integrations such as APIs or plugins. Hive streamlines communication paths between agents, enabling data sharing, decision-making, and task delegation. Its extensible design allows developers to implement custom utilities, monitor execution logs, and deploy agents at scale. Hive also includes features like error handling, retry policies, and performance optimizations to ensure reliable automation. With minimal setup, teams can prototype complex AI-driven services, including chatbots, data analysis pipelines, and content generators.
  • Junjo Python API offers Python developers seamless integration of AI agents, tool orchestration, and memory management in applications.
    What is Junjo Python API?
    Junjo Python API is an SDK that empowers developers to integrate AI agents into Python applications. It provides a unified interface for defining agents, connecting to LLMs, orchestrating tools like web search, databases, or custom functions, and maintaining conversational memory. Developers can build chains of tasks with conditional logic, stream responses to clients, and handle errors gracefully. The API supports plugin extensions, multilingual processing, and real-time data retrieval, enabling use cases from automated customer support to data analysis bots. With comprehensive documentation, code samples, and Pythonic design, Junjo Python API reduces time-to-market and operational overhead of deploying intelligent agent-based solutions.
  • LLM-Agent is a Python library for creating LLM-based agents that integrate external tools, execute actions, and manage workflows.
    What is LLM-Agent?
    LLM-Agent provides a structured architecture for building intelligent agents using LLMs. It includes a toolkit for defining custom tools, memory modules for context preservation, and executors that orchestrate complex chains of actions. Agents can call APIs, run local processes, query databases, and manage conversational state. Prompt templates and plugin hooks allow fine-tuning of agent behavior. Designed for extensibility, LLM-Agent supports adding new tool interfaces, custom evaluators, and dynamic routing of tasks, enabling automated research, data analysis, code generation, and more.
  • Open-source multi-agent AI framework enabling customizable LLM-driven bots for efficient task automation and conversational workflows.
    What is LLMLing Agent?
    LLMLing Agent is a modular framework for building, configuring, and deploying AI agents powered by large language models. Users can instantiate multiple agent roles, connect external tools or APIs, manage conversational memory, and orchestrate complex workflows. The platform includes a browser-based playground that visualizes agent interactions, logs message history, and allows real-time adjustments. With a Python SDK, developers can script custom behaviors, integrate vector databases, and extend the system through plugins. LLMLing Agent streamlines creation of chatbots, data analysis bots, and automated assistants by providing reusable components and clear abstractions for multi-agent collaboration.
  • Self-hosted AI agent management platform enabling creation, customization, and deployment of GPT-based chatbots with memory and plugin support.
    What is RainbowGPT?
    RainbowGPT provides a complete framework for designing, customizing, and deploying AI agents powered by OpenAI models. It includes a FastAPI backend, LangChain integration for tool and memory management, and a React-based UI for agent creation and testing. Users can upload documents for vector-based knowledge retrieval, define custom prompts and behaviors, and connect external APIs or functions. The platform logs interactions for analysis and supports multi-agent workflows, enabling complex automation and conversational pipelines. A minimal FastAPI chat-endpoint sketch appears after this list.
  • Rolodexter 3 orchestrates modular AI agents that collaborate to automate complex tasks via customizable prompts and integrated memory.
    What is Rolodexter 3?
    Rolodexter 3 enables you to build, customize, and orchestrate autonomous AI agents that work together to complete multi-step processes. Each agent can be assigned a specific role with tailored prompts, access external tools or APIs, and store or retrieve memory across sessions. The platform features an intuitive web UI for monitoring agent activity, logs, and results in real time. Developers can extend the system with custom plugins or integrate new data sources, making it ideal for rapid prototyping, research automation, and complex task delegation.
  • An AI Agent framework enabling multiple autonomous agents to self-coordinate and collaborate on complex tasks using conversational workflows.
    What is Self Collab AI?
    Self Collab AI provides a modular framework where developers define autonomous agents, communication channels, and task objectives. Agents use predefined prompts and patterns to negotiate responsibilities, exchange data, and iterate on solutions. Built on Python and easy-to-extend interfaces, it supports integration with LLMs, custom plugins, and external APIs. Teams can rapidly prototype complex workflows—such as research assistants, content generation, or data analysis pipelines—by configuring agent roles and collaboration rules without deep orchestration code.
  • SuperBot is a Python-based AI Agent framework offering CLI interface, plugin support, function calling, and memory management.
    What is SuperBot?
    SuperBot is a comprehensive AI Agent framework enabling developers to deploy autonomous, context-aware assistants via Python and the command line. It integrates OpenAI’s chat models with a memory system, function-calling features, and plugin architecture. Agents can execute shell commands, run code, interact with files, perform web searches, and maintain conversation state. SuperBot supports multi-agent orchestration for complex workflows, all configurable through simple Python scripts and CLI commands. Its extensible design allows you to add custom tools, automate tasks, and integrate external APIs to build robust AI-driven applications.
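
The MCP-CLI-Agent entry above describes translating a natural-language instruction into a shell command and running it. The sketch below shows that general pattern using the OpenAI Python client; it is not MCP-CLI-Agent's actual code, and the model name, system prompt, and confirmation step are illustrative assumptions.

```python
# Generic natural-language-to-shell sketch; not MCP-CLI-Agent's real API.
# Assumes OPENAI_API_KEY is set; the model name is an illustrative choice.
import subprocess
from openai import OpenAI

client = OpenAI()

def suggest_command(instruction: str) -> str:
    """Ask the model for a single shell command that fulfils the instruction."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Reply with exactly one POSIX shell command and no commentary."},
            {"role": "user", "content": instruction},
        ],
    )
    return resp.choices[0].message.content.strip()

if __name__ == "__main__":
    cmd = suggest_command("list the five largest files in the current directory")
    print(f"Proposed command: {cmd}")
    if input("Run it? [y/N] ").lower() == "y":  # always confirm model-generated commands
        subprocess.run(cmd, shell=True, check=False)
```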
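
The Multi-Agent AI Researcher entry describes a hypothesis, analysis, and drafting pipeline. Below is a much-simplified sketch of such a sequential pipeline, assuming the OpenAI Python client; the role prompts, model name, and flow are invented for illustration and are not the project's real agents.

```python
# Much-simplified sequential "hypothesis -> analysis -> draft" pipeline.
# The role prompts, model name, and flow are assumptions for illustration;
# this is not the project's actual orchestrator.
from openai import OpenAI

client = OpenAI()

ROLES = {
    "hypothesis": "You propose one concise, testable research hypothesis.",
    "analysis": "You outline how to evaluate the hypothesis and what data it needs.",
    "drafting": "You turn the analysis into a short, structured abstract.",
}

def run_agent(role: str, task: str) -> str:
    """Run a single role-scoped LLM call and return its text output."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": ROLES[role]},
                  {"role": "user", "content": task}],
    )
    return resp.choices[0].message.content

topic = "effects of context length on retrieval-augmented generation quality"
hypothesis = run_agent("hypothesis", topic)
analysis = run_agent("analysis", hypothesis)
draft = run_agent("drafting", f"Hypothesis: {hypothesis}\n\nAnalysis: {analysis}")
print(draft)
```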
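
The Swarms entry mentions defining agent interactions through YAML configuration. The sketch below illustrates configuration-driven agent definition in general terms; the YAML schema, Agent class, and flow field are invented for this example and are not Swarms' actual format or API.

```python
# Configuration-driven agent definition; the YAML schema, Agent class, and
# "flow" field are invented for illustration and are not Swarms' format.
from dataclasses import dataclass
import yaml  # pip install pyyaml

CONFIG = """
agents:
  - name: researcher
    role: "Collect and summarize background material."
  - name: reviewer
    role: "Critique the researcher's summary and flag gaps."
flow: [researcher, reviewer]
"""

@dataclass
class Agent:
    name: str
    role: str

    def run(self, message: str) -> str:
        # Placeholder: a real agent would call an LLM with its role prompt here.
        return f"[{self.name}] handled: {message}"

config = yaml.safe_load(CONFIG)
agents = {spec["name"]: Agent(**spec) for spec in config["agents"]}

message = "Draft a report on open-source agent frameworks."
for step in config["flow"]:  # simple sequential orchestration
    message = agents[step].run(message)
print(message)
```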
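
The TinyAgent entry describes combining function calling with in-memory conversational context. The sketch below shows that generic tool-plus-memory loop using the OpenAI Python client's tools interface; it is not TinyAgent's API, and the get_time tool, model name, and prompts are illustrative assumptions.

```python
# Generic function-calling loop with an in-memory message list; not TinyAgent's API.
import json
from openai import OpenAI

client = OpenAI()

def get_time(timezone: str) -> str:
    """Stub tool; a real implementation would look the time up."""
    return f"12:00 in {timezone}"

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_time",
        "description": "Return the current time in a timezone.",
        "parameters": {
            "type": "object",
            "properties": {"timezone": {"type": "string"}},
            "required": ["timezone"],
        },
    },
}]

# The message list doubles as the in-memory conversational context.
messages = [{"role": "user", "content": "What time is it in UTC?"}]
resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=TOOLS)
msg = resp.choices[0].message

if msg.tool_calls:  # the model chose to call the tool
    call = msg.tool_calls[0]
    result = get_time(**json.loads(call.function.arguments))
    messages += [msg, {"role": "tool", "tool_call_id": call.id, "content": result}]
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(final.choices[0].message.content)
```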
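
The HyperChat entry highlights streamed partial replies for responsive UIs. The snippet below is a minimal, generic streaming example using the OpenAI Python client rather than HyperChat's own interface; the model name and prompt are assumptions.

```python
# Minimal streamed-reply example using the OpenAI Python client; not
# HyperChat's interface. The model name and prompt are assumptions.
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain streaming responses in one paragraph."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content  # partial text, or None on the final chunk
    if delta:
        print(delta, end="", flush=True)
print()
```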
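
The AIBrokers entry describes routing each task to the most suitable model. The sketch below is a hypothetical routing broker that captures the idea; the Broker class, route names, and stub models are invented for illustration and are not AIBrokers' components.

```python
# Hypothetical routing broker; the Broker class, route names, and stub models
# are invented for illustration and are not AIBrokers' actual components.
from typing import Callable, Dict

def language_model(task: str) -> str:
    return f"(text model) {task}"

def vision_model(task: str) -> str:
    return f"(vision model) {task}"

class Broker:
    """Dispatch each task to the handler registered for its task type."""

    def __init__(self, routes: Dict[str, Callable[[str], str]],
                 default: Callable[[str], str]):
        self.routes = routes
        self.default = default

    def dispatch(self, task_type: str, payload: str) -> str:
        handler = self.routes.get(task_type, self.default)  # customizable routing rule
        return handler(payload)

broker = Broker({"summarize": language_model, "caption": vision_model},
                default=language_model)
print(broker.dispatch("caption", "photo_001.jpg"))
print(broker.dispatch("summarize", "quarterly report text ..."))
```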
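
The RainbowGPT entry mentions a FastAPI backend behind a React UI. The sketch below shows the general shape of such a chat endpoint; the route, request model, and echo reply are illustrative assumptions rather than RainbowGPT's actual backend.

```python
# Minimal FastAPI chat endpoint; the route, request model, and echo reply are
# illustrative assumptions, not RainbowGPT's actual backend.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    session_id: str
    message: str

@app.post("/chat")
def chat(req: ChatRequest) -> dict:
    # A real backend would call an LLM here and consult the session's memory.
    return {"session_id": req.session_id, "reply": f"echo: {req.message}"}

# Run with: uvicorn main:app --reload  (assuming this file is main.py)
```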