Comprehensive Custom Plugin Tools for Every Need

Get access to custom plugin solutions that address multiple requirements. One-stop resources for streamlined workflows.

Custom Plugins

  • Esquilax is a TypeScript framework for orchestrating multi-agent AI workflows, managing memory, context, and plugin integrations.
    What is Esquilax?
    Esquilax is a lightweight TypeScript framework designed for building and orchestrating complex AI agent workflows. It provides developers with a clear API to declaratively define agents, assign memory modules, and integrate custom plugin actions such as API calls or database queries. With built-in support for context handling and multi-agent coordination, Esquilax streamlines the creation of chatbots, digital assistants, and automated processes. Its event-driven architecture allows tasks to be chained or triggered dynamically, while logging and debugging tools offer full visibility into agent interactions. By abstracting away boilerplate code, Esquilax helps teams rapidly prototype scalable AI-driven applications.
  • A lightweight web-based AI agent platform enabling developers to deploy and customize conversational bots with API integrations.
    What is Lite Web Agent?
    Lite Web Agent is a browser-native platform that allows users to create, configure, and deploy AI-driven conversational agents. It offers a visual flow builder, support for REST and WebSocket API integrations, state persistence, and plugin hooks for custom logic. Agents run fully on the client side for low latency and privacy, while optional server connectors enable data storage and advanced processing. It is ideal for embedding chatbots on websites, intranets, or applications without complex backend setups.
  • Live embeds a context-aware AI assistant into any website for content generation, summarization, data extraction, and task automation.
    What is Live by Vroom AI?
    Live by Vroom AI is an open framework and browser extension that brings AI agents directly into your web browsing experience. By installing Live, you gain access to a sidebar AI assistant that understands page context and performs tasks such as generating marketing copy, summarizing articles, extracting structured data, filling forms automatically, and answering domain-specific questions. Developers can extend Live with custom plugins using its SDK and integrate their own LLM models or third-party APIs to tailor the agent to specific workflows.
  • LLMFlow is an open-source framework enabling the orchestration of LLM-based workflows with tool integration and flexible routing.
    What is LLMFlow?
    LLMFlow provides a declarative way to design, test, and deploy complex language model workflows. Developers create Nodes, which represent prompts or actions, and chain them into Flows that can branch based on conditions or external tool outputs. Built-in memory management tracks context between steps, while adapters enable integration with OpenAI, Hugging Face, and other providers. Functionality can be extended via plugins for custom tools or data sources, and Flows can run locally, in containers, or as serverless functions. Use cases include conversational agents, automated report generation, and data extraction pipelines, all with transparent execution and logging. (A short, hypothetical Node-and-Flow sketch appears after this list.)
  • Melissa is an AI-powered personal assistant that manages tasks, automates workflows, and answers queries through natural language chat.
    What is Melissa?
    Melissa operates as a conversational AI agent that uses advanced natural language understanding to interpret user commands, generate context-aware responses, and perform automated tasks. It provides features such as task scheduling, appointment reminders, data lookup, and integration with external APIs like Google Calendar, Slack, and email services. Users can extend Melissa’s capabilities through custom plugins, create workflows for repetitive processes, and access its knowledge base for quick information retrieval. Because Melissa is open source, developers can self-host it on cloud or local servers, configure permissions, and tailor its behavior to organizational requirements or personal preferences, making it a flexible solution for productivity, customer support, and digital assistance.
  • An open-source AI agent framework enabling automated planning, tool integration, decision-making, and workflow orchestration with LLMs.
    What is MindForge?
    MindForge is a robust orchestration framework designed for building and deploying AI-driven agents with minimal boilerplate. It offers a modular architecture comprising a task planner, reasoning engine, memory manager, and tool execution layer. By leveraging LLMs, agents can parse user input, formulate plans, and invoke external tools, such as web scraping APIs, databases, or custom scripts, to accomplish complex tasks. Memory components store conversational context, enabling multi-turn interactions, while the decision engine dynamically selects actions based on defined policies. With plugin support and customizable pipelines, developers can extend functionality to include custom tools, third-party integrations, and domain-specific knowledge bases. MindForge simplifies AI agent development, facilitating rapid prototyping and scalable deployment in production environments. (A generic plan-and-execute sketch in this spirit appears after this list.)
  • A lightweight Python framework to orchestrate LLM-powered agents with tool integration, memory, and customizable action loops.
    What is Python AI Agent?
    Python AI Agent provides a developer-friendly toolkit to orchestrate autonomous agents driven by large language models. It offers built-in mechanisms for defining custom tools and actions, maintaining conversation history with memory modules, and streaming responses for interactive experiences. Users can extend its plugin architecture to integrate APIs, databases, and external services, enabling agents to fetch data, perform computations, and automate workflows. The library supports configurable pipelines, error handling, and logging for robust deployments. With minimal boilerplate, developers can build chatbots, virtual assistants, data analyzers, or task automators that leverage LLM reasoning and multi-step decision making. The library is open source, encouraging community contributions, and runs in any Python environment. (An illustrative tool-and-memory sketch follows this list.)
  • Saiki is a framework to define, chain, and monitor autonomous AI agents through simple YAML configs and REST APIs.
    What is Saiki?
    Saiki is an open-source agent orchestration framework that empowers developers to build complex AI-driven workflows by writing declarative YAML definitions. Each agent can perform tasks, call external services, or invoke other agents in a chained sequence. Saiki provides a built-in REST API server, execution tracing, detailed log output, and a web-based dashboard for real-time monitoring. It supports retries, fallbacks, and custom extensions, making it easy to iterate, debug, and scale robust automation pipelines. (A hypothetical trigger example appears after this list.)
  • Open-source framework to deploy autonomous AI agents on serverless cloud functions for scalable workflow automation.
    What is Serverless AI Agent?
    Serverless AI Agent simplifies the creation and deployment of autonomous AI agents by leveraging serverless cloud functions. By defining agent behaviors in simple configuration files, developers can enable AI-driven workflows that process natural language input, interact with APIs, execute database queries, and emit events. The framework abstracts infrastructure concerns, automatically scaling agent functions in response to demand. With built-in state persistence, logging, and error handling, Serverless AI Agent supports reliable long-running tasks, scheduled jobs, and event-driven automations. Developers can integrate custom middleware, choose from multiple cloud providers, and extend the agent’s capabilities with plugins for monitoring, authentication, and data storage. This results in rapid prototyping and deployment of robust AI-powered solutions. (A sample serverless handler sketch follows this list.)
  • Web-Agent is a browser-based AI agent library enabling automated web interactions, scraping, navigation, and form filling using natural language commands.
    What is Web-Agent?
    Web-Agent is a Node.js library designed to turn natural language instructions into browser operations. It integrates with popular LLM providers (OpenAI, Anthropic, etc.) and controls headless or headful browsers to perform actions like scraping page data, clicking buttons, filling out forms, navigating multi-step workflows, and exporting results. Developers can define agent behaviors in code or JSON, extend via plugins, and chain tasks to build complex automation flows. It simplifies tedious web tasks, testing, and data gathering by letting AI interpret and execute them.
  • An AI Agent platform automating data science workflows by generating code, querying databases, and visualizing data seamlessly.
    What is Cognify?
    Cognify enables users to define data science goals and lets AI Agents handle the heavy lifting. Agents can write and debug code, connect to databases for querying insights, produce interactive visualizations, and even export reports. With a plugin architecture, users can extend functionality to custom APIs, scheduling systems, and cloud services. Cognify offers reproducibility, collaboration features, and logging to track agent decisions and outputs, making it suitable for rapid prototyping and production workflows.
  • An open-source AI agent framework enabling modular planning, memory management, and tool integration for automated, multi-step workflows.
    What is Pillar?
    Pillar is a comprehensive AI agent framework designed to simplify the development and deployment of intelligent multi-step workflows. It features a modular architecture with planners for task decomposition, memory stores for context retention, and executors that perform actions via external APIs or custom code. Developers can define agent pipelines in YAML or JSON, integrate any LLM provider, and extend functionality through custom plugins. Pillar handles asynchronous execution and context management out of the box, reducing boilerplate code and accelerating time-to-market for AI-driven applications such as chatbots, data analysis assistants, and automated business processes.
  • PrisimAI lets you visually design, test, and deploy AI agents integrating LLMs, APIs, and memory in a single platform.
    What is PrisimAI?
    PrisimAI provides a browser-based environment where users can rapidly prototype and deploy intelligent agents. Through a visual flow builder, you can assemble LLM-powered components, integrate external APIs, manage long-term memory, and orchestrate multi-step tasks. Built-in debugging and monitoring simplify testing and iteration, while a plugin marketplace allows extension with custom tools. PrisimAI supports collaboration across teams, version control for agent designs, and one-click deployment for webhooks, chat widgets, or standalone services.
  • A TypeScript framework for building and customizing LangChain AI agents with tool integration and memory management.
    What is Agents from Scratch TS?
    Agents from Scratch TS is an open-source TypeScript framework that demonstrates how to build AI agents from the ground up using LangChain. It includes sample code for defining and registering external tools, managing conversational memory, routing user inputs to the right agent, and chaining multiple LLM calls. Developers can use it to understand best practices, customize agent behaviors, and integrate new capabilities such as web search, data retrieval, or custom plugins to automate tasks or build interactive assistants.
  • Aladin is an open-source autonomous LLM agent enabling scripted workflows, memory-enabled decision-making, and plugin-based task orchestration.
    What is Aladin?
    Aladin provides a modular architecture that allows developers to define autonomous agents powered by large language models (LLMs). Each agent can load memory backends (e.g., SQLite, in-memory), utilize dynamic prompt templates, and integrate custom plugins for external API calls or local command execution. It features a task planner that breaks high-level goals into sequenced actions, executing them in order and iterating based on LLM feedback. Configuration is managed through YAML files and environment variables, making it adaptable to various use cases. Users can deploy Aladin via Docker Compose or pip installation. The CLI and FastAPI-based HTTP endpoints let users trigger agents, monitor execution, and inspect memory states, facilitating integration with CI/CD pipelines, chat interfaces, or custom dashboards. (An illustrative HTTP client example appears after this list.)
  • A Python-based autonomous AI Agent framework providing memory, reasoning, and tool integration for multi-step task automation.
    What is CereBro?
    CereBro offers a modular architecture for creating AI agents capable of self-directed task decomposition, persistent memory, and dynamic tool usage. It includes a Brain core managing thoughts, actions, and memory, supports custom plugins for external APIs, and provides a CLI interface for orchestration. Users can define agent goals, configure reasoning strategies, and integrate functions such as web search, file operations, or domain-specific tools to execute tasks end-to-end without manual intervention.
  • ClassiCore-Public automates ML classification, offering data preprocessing, model selection, hyperparameter tuning, and scalable API deployment.
    What is ClassiCore-Public?
    ClassiCore-Public provides a comprehensive environment for building, optimizing, and deploying classification models. It features an intuitive pipeline builder that handles raw data ingestion, cleaning, and feature engineering. The built-in model zoo includes algorithms like Random Forests, SVMs, and deep learning architectures. Automated hyperparameter tuning uses Bayesian optimization to find optimal settings. Trained models can be deployed as RESTful APIs or microservices, with monitoring dashboards tracking performance metrics in real time. Extensible plugins let developers add custom preprocessing, visualization, or new deployment targets, making ClassiCore-Public ideal for industrial-scale classification tasks. (A generic scikit-learn pipeline illustrating this workflow appears after this list.)
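Illustrative sketches for several of the entries above follow. Unless noted otherwise, the class names, routes, and payloads shown are hypothetical stand-ins, not the tools' documented APIs.

The Node-and-Flow chaining that the LLMFlow entry describes can be pictured with a small, self-contained Python sketch. This is not LLMFlow's actual API; the Node and Flow classes, their method names, and the routing logic below are assumptions made purely for illustration.

```python
# Hypothetical illustration of the Node/Flow chaining pattern described for
# LLMFlow; class names and signatures are assumptions, not the real API.
from typing import Any, Callable, Dict, Optional


class Node:
    """A single step: takes the shared context, returns a context update."""
    def __init__(self, name: str, fn: Callable[[Dict[str, Any]], Dict[str, Any]]):
        self.name = name
        self.fn = fn

    def run(self, context: Dict[str, Any]) -> Dict[str, Any]:
        return self.fn(context)


class Flow:
    """Chains nodes in order, passing a shared context (the 'memory') along."""
    def __init__(self, nodes: list[Node]):
        self.nodes = nodes

    def execute(self, context: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
        context = dict(context or {})
        for node in self.nodes:
            update = node.run(context)
            context.update(update)                 # context persists between steps
            print(f"[{node.name}] -> {update}")    # transparent execution / logging
        return context


# Example: a two-step flow that classifies a request, then routes it.
classify = Node("classify", lambda ctx: {"intent": "report" if "report" in ctx["text"] else "chat"})
route = Node("route", lambda ctx: {"handler": "report_generator" if ctx["intent"] == "report" else "chatbot"})

result = Flow([classify, route]).execute({"text": "generate the Q3 report"})
print(result["handler"])  # -> report_generator
```

Conditional branching could be added by letting a node return the name of the next node to run, which is the kind of routing the entry attributes to conditional Flows.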
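MindForge's planner, reasoning, memory, and tool layers are described above only at a high level; the sketch below is a generic plan-then-execute loop in that spirit, not MindForge code. The planner here is a hard-coded stand-in (a real agent would ask an LLM for the steps), and the tool registry and step format are hypothetical.

```python
# Generic plan-then-execute loop in the style described for MindForge.
# The planner, tool registry, and step format are hypothetical illustrations.
from typing import Any, Callable, Dict, List

TOOLS: Dict[str, Callable[..., Any]] = {
    "search": lambda query: f"top results for {query!r}",
    "summarize": lambda text: text[:40] + "...",
}

def plan(goal: str) -> List[Dict[str, Any]]:
    """Stand-in planner. A real system would ask an LLM to produce these steps."""
    return [
        {"tool": "search", "args": {"query": goal}},
        {"tool": "summarize", "args": {"text": "PREVIOUS"}},  # filled from memory below
    ]

def run(goal: str) -> List[Any]:
    memory: List[Any] = []                       # conversational / working memory
    for step in plan(goal):
        args = dict(step["args"])
        if args.get("text") == "PREVIOUS":       # feed the prior result into the next tool
            args["text"] = memory[-1]
        result = TOOLS[step["tool"]](**args)
        memory.append(result)
    return memory

print(run("AI agent frameworks"))
```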
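Because Python AI Agent's exact interface is not shown in its entry, the following is a hypothetical miniature of the pattern it lists: registering custom tools, keeping conversation history as memory, and running an action loop. The MiniAgent class and the command format are illustrative inventions, not the library's API.

```python
# Hypothetical sketch of the tool + memory + action-loop pattern described for
# Python AI Agent; these names are illustrative, not the library's API.
from typing import Callable, Dict, List


class MiniAgent:
    def __init__(self) -> None:
        self.tools: Dict[str, Callable[[str], str]] = {}
        self.history: List[str] = []             # memory module: conversation history

    def tool(self, name: str):
        """Register a custom tool/action under a name."""
        def decorator(fn: Callable[[str], str]) -> Callable[[str], str]:
            self.tools[name] = fn
            return fn
        return decorator

    def act(self, command: str) -> str:
        self.history.append(command)
        tool_name, _, arg = command.partition(":")   # e.g. "lookup: weather in Berlin"
        result = self.tools.get(tool_name, lambda a: f"no tool named {tool_name!r}")(arg)
        self.history.append(result)
        return result


agent = MiniAgent()

@agent.tool("lookup")
def lookup(arg: str) -> str:
    return f"looked up {arg.strip()!r} in a data source"

print(agent.act("lookup: weather in Berlin"))
print(agent.history)
```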
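Saiki is described as being driven by YAML definitions and a built-in REST API server. The snippet below sketches what calling such a setup from Python might look like; the YAML fields, the port, and the /agents/<name>/run route are assumptions, not Saiki's documented schema.

```python
# Hypothetical example of driving a Saiki-style setup from Python: the YAML
# shape and the /agents/<name>/run endpoint are assumptions, not the real schema.
import requests
import yaml

AGENT_YAML = """
name: summarizer
steps:
  - call: fetch_article      # tasks, external services, or other agents
  - call: summarize
retries: 2
"""

agent = yaml.safe_load(AGENT_YAML)

# Trigger the agent through a locally running REST API server (URL is illustrative).
response = requests.post(
    f"http://localhost:8080/agents/{agent['name']}/run",
    json={"input": "https://example.com/post"},
    timeout=30,
)
print(response.status_code, response.json())
```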
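To make the serverless deployment model described for Serverless AI Agent concrete, here is a minimal Python handler in the AWS Lambda signature style. The event fields, the intent routing, and the assumption that an agent step is packaged this way are all illustrative, not taken from the framework's documentation.

```python
# Sketch of an agent step packaged as a serverless function handler (AWS
# Lambda-style signature). The event fields and routing are illustrative only.
import json


def handler(event, context):
    """Entry point invoked by the cloud provider on each request or event."""
    body = json.loads(event.get("body", "{}"))
    text = body.get("text", "")

    # Minimal stand-in for "process natural language input": a real deployment
    # would call an LLM here and possibly query a database or another API.
    intent = "schedule" if "schedule" in text.lower() else "lookup"

    result = {"intent": intent, "input": text}
    # State persistence, retries, and event emission would be handled by the
    # surrounding framework and provider; here we simply return the payload.
    return {"statusCode": 200, "body": json.dumps(result)}


if __name__ == "__main__":
    fake_event = {"body": json.dumps({"text": "Schedule a weekly report"})}
    print(handler(fake_event, None))
```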
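The Aladin entry mentions FastAPI-based HTTP endpoints for triggering agents and inspecting memory. The client calls below show what that interaction could look like from Python; the base URL, routes, and payload fields are hypothetical, not Aladin's documented API.

```python
# Hypothetical client calls against an Aladin deployment's HTTP API; the
# endpoint paths and payload fields are assumptions, not documented routes.
import requests

BASE = "http://localhost:8000"  # e.g. the FastAPI app started via Docker Compose

# Ask an agent to pursue a high-level goal; the planner breaks it into actions.
run = requests.post(
    f"{BASE}/agents/researcher/run",
    json={"goal": "Collect this week's release notes and draft a summary"},
    timeout=60,
)
print(run.json())

# Inspect the agent's memory state after the run (illustrative route).
memory = requests.get(f"{BASE}/agents/researcher/memory", timeout=10)
print(memory.json())
```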
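ClassiCore-Public's own interfaces are not shown in its entry, so the example below uses scikit-learn to illustrate the same workflow it outlines: preprocessing, a Random Forest from a model zoo, and automated hyperparameter search (plain grid search here rather than the Bayesian optimization mentioned above). It is a generic sketch of the workflow, not ClassiCore-Public code.

```python
# Generic preprocessing + model selection + tuning pipeline, illustrating the
# workflow described for ClassiCore-Public (not its API). Uses grid search
# instead of Bayesian optimization for simplicity.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),                 # preprocessing / feature scaling
    ("clf", RandomForestClassifier(random_state=0)),
])

search = GridSearchCV(
    pipeline,
    param_grid={"clf__n_estimators": [100, 300], "clf__max_depth": [None, 8]},
    cv=3,
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))
```

A tuned pipeline like this could then be wrapped in a small web service to mirror the REST deployment step the entry describes.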