Comprehensive Custom Plugin Tools for Every Need

Get access to custom plugin solutions that address multiple requirements. These one-stop resources support streamlined workflows.

custom plugins

  • ClassiCore-Public automates ML classification, offering data preprocessing, model selection, hyperparameter tuning, and scalable API deployment.
    What is ClassiCore-Public?
    ClassiCore-Public provides a comprehensive environment for building, optimizing, and deploying classification models. It features an intuitive pipeline builder that handles raw data ingestion, cleaning, and feature engineering. The built-in model zoo includes algorithms like Random Forests, SVMs, and deep learning architectures. Automated hyperparameter tuning uses Bayesian optimization to find optimal settings. Trained models can be deployed as RESTful APIs or microservices, with monitoring dashboards tracking performance metrics in real time. Extensible plugins let developers add custom preprocessing, visualization, or new deployment targets, making ClassiCore-Public ideal for industrial-scale classification tasks. An illustrative scikit-learn sketch of this kind of workflow appears after this list.
  • Esquilax is a TypeScript framework for orchestrating multi-agent AI workflows, managing memory, context, and plugin integrations.
    What is Esquilax?
    Esquilax is a lightweight TypeScript framework designed for building and orchestrating complex AI agent workflows. It provides developers with a clear API to declaratively define agents, assign memory modules, and integrate custom plugin actions such as API calls or database queries. With built-in support for context handling and multi-agent coordination, Esquilax streamlines the creation of chatbots, digital assistants, and automated processes. Its event-driven architecture allows tasks to be chained or triggered dynamically, while logging and debugging tools offer full visibility into agent interactions. By abstracting away boilerplate code, Esquilax helps teams rapidly prototype scalable AI-driven applications.
  • A lightweight web-based AI agent platform enabling developers to deploy and customize conversational bots with API integrations.
    What is Lite Web Agent?
    Lite Web Agent is a browser-native platform that allows users to create, configure, and deploy AI-driven conversational agents. It offers a visual flow builder, support for REST and WebSocket API integrations, state persistence, and plugin hooks for custom logic. Agents run fully on the client side for low latency and privacy, while optional server connectors enable data storage and advanced processing. It is ideal for embedding chatbots on websites, intranets, or applications without complex backend setups.
  • Live embeds a context-aware AI assistant into any website for content generation, summarization, data extraction, and task automation.
    What is Live by Vroom AI?
    Live by Vroom AI is an open framework and browser extension that brings AI agents directly into your web browsing experience. By installing Live, you gain access to a sidebar AI assistant that understands page context and performs tasks such as generating marketing copy, summarizing articles, extracting structured data, filling forms automatically, and answering domain-specific questions. Developers can extend Live with custom plugins using its SDK and integrate their own LLM models or third-party APIs to tailor the agent to specific workflows.
  • LLMFlow is an open-source framework enabling the orchestration of LLM-based workflows with tool integration and flexible routing.
    What is LLMFlow?
    LLMFlow provides a declarative way to design, test, and deploy complex language model workflows. Developers create Nodes, which represent prompts or actions, and chain them into Flows that can branch based on conditions or external tool outputs. Built-in memory management tracks context between steps, while adapters enable seamless integration with OpenAI, Hugging Face, and others. Extend functionality via plugins for custom tools or data sources. Execute Flows locally, in containers, or as serverless functions. Use cases include creating conversational agents, automated report generation, and data extraction pipelines—all with transparent execution and logging. A hand-rolled sketch of this Node-and-Flow pattern appears after this list.
  • Open-source framework to build AI personal assistants with semantic memory, plugin-based web search, file tools, and Python execution.
    What is PersonalAI?
    PersonalAI offers a comprehensive agent framework that combines advanced LLM integrations with persistent semantic memory and an extensible plugin system. Developers can configure memory backends like Redis, SQLite, PostgreSQL, or vector stores to manage embeddings and recall past conversations. Built-in plugins support tasks such as web search, file reading/writing, and Python code execution, while a robust plugin API allows custom tool development. The agent orchestrates LLM prompts and tool invocations in a directed workflow, enabling context-aware responses and automated actions. Use local LLMs via Hugging Face or cloud services via OpenAI and Azure OpenAI. PersonalAI’s modular design facilitates rapid prototyping of domain-specific assistants, automated research bots, or knowledge management agents that learn and adapt over time.
  • An open-source AI agent framework facilitating coordinated multi-agent task orchestration with GPT integration.
    What is MCP Crew AI?
    MCP Crew AI is a developer-focused framework that simplifies the creation and coordination of GPT-based AI agents in collaborative teams. By defining manager, worker, and monitor agent roles, it automates task delegation, execution, and oversight. The package offers built-in support for OpenAI’s API, a modular architecture for custom agent plugins, and a CLI for running and monitoring your Crew. MCP Crew AI accelerates multi-agent system development, making it easier to build scalable, transparent, and maintainable AI-driven workflows.
  • Melissa is an AI-powered personal assistant that manages tasks, automates workflows, and answers queries through natural language chat.
    What is Melissa?
    Melissa operates as a conversational AI agent that uses advanced natural language understanding to interpret user commands, generate context-aware responses, and perform automated tasks. It provides features such as task scheduling, appointment reminders, data lookup, and integration with external APIs like Google Calendar, Slack, and email services. Users can extend Melissa’s capabilities through custom plugins, create workflows for repetitive processes, and access its knowledge base for quick information retrieval. Because Melissa is open source, developers can self-host it on cloud or local servers, configure permissions, and tailor its behavior to organizational requirements or personal preferences, making it a flexible solution for productivity, customer support, and digital assistance.
  • An open-source AI agent framework enabling automated planning, tool integration, decision-making, and workflow orchestration with LLMs.
    What is MindForge?
    MindForge is a robust orchestration framework designed for building and deploying AI-driven agents with minimal boilerplate. It offers a modular architecture comprising a task planner, reasoning engine, memory manager, and tool execution layer. By leveraging LLMs, agents can parse user input, formulate plans, and invoke external tools—such as web scraping APIs, databases, or custom scripts—to accomplish complex tasks. Memory components store conversational context, enabling multi-turn interactions, while the decision engine dynamically selects actions based on defined policies. With plugin support and customizable pipelines, developers can extend functionality to include custom tools, third-party integrations, and domain-specific knowledge bases. MindForge simplifies AI agent development, facilitating rapid prototyping and scalable deployment in production environments.
  • Camel is an open-source AI agent orchestration framework enabling multi-agent collaboration, tool integration, and planning with LLMs & knowledge graphs.
    What is Camel AI?
    Camel AI is an open-source framework designed to simplify the creation and orchestration of intelligent agents. It offers abstractions for chaining large language models, integrating external tools and APIs, managing knowledge graphs, and persisting memory. Developers can define multi-agent workflows, decompose tasks into subplans, and monitor execution through a CLI or web UI. Built on Python and Docker, Camel AI allows seamless swapping of LLM providers, custom tool plugins, and hybrid planning strategies, accelerating development of automated assistants, data pipelines, and autonomous workflows at scale.
  • Operit is an open-source AI agent framework offering dynamic tool integration, multi-step reasoning, and customizable plugin-based skill orchestration.
    What is Operit?
    Operit is a comprehensive open-source AI agent framework designed to streamline the creation of autonomous agents for various tasks. By integrating with LLMs like OpenAI’s GPT and local models, it enables dynamic reasoning across multi-step workflows. Users can define custom plugins to handle data fetching, web scraping, database queries, or code execution, while Operit manages session context, memory, and tool invocation. The framework offers a clear API for building, testing, and deploying agents with persistent state, configurable pipelines, and error-handling mechanisms. Whether you’re developing customer support bots, research assistants, or business automation agents, Operit’s extensible architecture and robust tooling ensure rapid prototyping and scalable deployments.
  • A lightweight Python framework to orchestrate LLM-powered agents with tool integration, memory, and customizable action loops.
    What is Python AI Agent?
    Python AI Agent provides a developer-friendly toolkit to orchestrate autonomous agents driven by large language models. It offers built-in mechanisms for defining custom tools and actions, maintaining conversation history with memory modules, and streaming responses for interactive experiences. Users can extend its plugin architecture to integrate APIs, databases, and external services, enabling agents to fetch data, perform computations, and automate workflows. The library supports configurable pipelines, error handling, and logging for robust deployments. With minimal boilerplate, developers can build chatbots, virtual assistants, data analyzers, or task automators that leverage LLM reasoning and multi-step decision making. Its open-source nature encourages community contributions, and the library adapts to any Python environment. A sketch of the tool-registry-and-action-loop pattern appears after this list.
  • Saiki is a framework to define, chain, and monitor autonomous AI agents through simple YAML configs and REST APIs.
    What is Saiki?
    Saiki is an open-source agent orchestration framework that empowers developers to build complex AI-driven workflows by writing declarative YAML definitions. Each agent can perform tasks, call external services, or invoke other agents in a chained sequence. Saiki provides a built-in REST API server, execution tracing, detailed log output, and a web-based dashboard for real-time monitoring. It supports retries, fallbacks, and custom extensions, making it easy to iterate, debug, and scale robust automation pipelines. A sketch of a declarative agent definition in this style appears after this list.
  • Open-source framework to deploy autonomous AI agents on serverless cloud functions for scalable workflow automation.
    What is Serverless AI Agent?
    Serverless AI Agent simplifies the creation and deployment of autonomous AI agents by leveraging serverless cloud functions. By defining agent behaviors in simple configuration files, developers can enable AI-driven workflows that process natural language input, interact with APIs, execute database queries, and emit events. The framework abstracts infrastructure concerns, automatically scaling agent functions in response to demand. With built-in state persistence, logging, and error handling, Serverless AI Agent supports reliable long-running tasks, scheduled jobs, and event-driven automations. Developers can integrate custom middleware, choose from multiple cloud providers, and extend the agent’s capabilities with plugins for monitoring, authentication, and data storage. This results in rapid prototyping and deployment of robust AI-powered solutions. A generic serverless handler sketch appears after this list.
  • Open-source framework for building production-ready AI chatbots with customizable memory, vector search, multi-turn dialogue, and plugin support.
    What is Stellar Chat?
    Stellar Chat empowers teams to build conversational AI agents by providing a robust framework that abstracts LLM interactions, memory management, and tool integrations. At its core, it features an extensible pipeline that handles user input preprocessing, context enrichment through vector-based memory retrieval, and LLM invocation with configurable prompting strategies. Developers can plug in popular vector storage solutions like Pinecone, Weaviate, or FAISS, and integrate third-party APIs or custom plugins for tasks like web search, database queries, or enterprise application control. With support for streaming outputs and real-time feedback loops, Stellar Chat ensures responsive user experiences. It also includes starter templates and best-practice examples for customer support bots, knowledge search, and internal workflow automation. Deployed with Docker or Kubernetes, it scales to meet production demands while remaining fully open-source under the MIT license. A generic sketch of the vector-memory retrieval step appears after this list.
  • An open-source autonomous AI agent framework that executes tasks, integrates tools like a browser and terminal, and refines its memory through human feedback.
    What is SuperPilot?
    SuperPilot is an autonomous AI agent framework that leverages large language models to perform multi-step tasks without manual intervention. By integrating GPT and Anthropic models, it can generate plans and call external tools such as a headless browser for web scraping or a terminal for executing shell commands, while memory modules retain context. Users define goals, and SuperPilot dynamically orchestrates sub-tasks, maintains a task queue, and adapts to new information. The modular architecture allows adding custom tools, adjusting model settings, and logging interactions. With built-in feedback loops, human input can refine decision-making and improve results. This makes SuperPilot suitable for automating research, coding tasks, testing, and routine data processing workflows.
  • Web-Agent is a browser-based AI agent library enabling automated web interactions, scraping, navigation, and form filling using natural language commands.
    What is Web-Agent?
    Web-Agent is a Node.js library designed to turn natural language instructions into browser operations. It integrates with popular LLM providers (OpenAI, Anthropic, etc.) and controls headless or headful browsers to perform actions like scraping page data, clicking buttons, filling out forms, navigating multi-step workflows, and exporting results. Developers can define agent behaviors in code or JSON, extend via plugins, and chain tasks to build complex automation flows. It simplifies tedious web tasks, testing, and data gathering by letting AI interpret and execute them.
  • An AI Agent platform automating data science workflows by generating code, querying databases, and visualizing data seamlessly.
    What is Cognify?
    Cognify enables users to define data science goals and lets AI Agents handle the heavy lifting. Agents can write and debug code, connect to databases for querying insights, produce interactive visualizations, and even export reports. With a plugin architecture, users can extend functionality to custom APIs, scheduling systems, and cloud services. Cognify offers reproducibility, collaboration features, and logging to track agent decisions and outputs, making it suitable for rapid prototyping and production workflows.
  • An open-source Python framework to build AI-powered Discord chatbots with LLM support, plugin integration, and memory management.
    What is Discord AI Agent?
    Discord AI Agent leverages the Discord API and OpenAI-compatible LLMs to transform any server into an interactive AI chat environment. Developers can register custom plugins to handle slash commands, message events, or scheduled tasks, while built-in memory storage retains conversation context for coherent multi-turn dialogues. The framework supports asynchronous execution, configurable models, prompt templates, and logging for debugging. By editing a single YAML or JSON configuration, you can define API keys, model preferences, command prefixes, and plugin directories. Its extension-friendly architecture allows adding specialized functionality such as moderation, trivia games, or customer support bots. Whether running locally or deploying on cloud platforms, Discord AI Agent simplifies the process of building flexible, maintainable AI agents for community engagement. A sketch of the single-file configuration idea appears after this list.
  • Hyperbolic Time Chamber enables developers to build modular AI agents with advanced memory management, prompt chaining, and custom tool integration.
    What is Hyperbolic Time Chamber?
    Hyperbolic Time Chamber provides a flexible environment for constructing AI agents by offering components for memory management, context window orchestration, prompt chaining, tool integration, and execution control. Developers define agent behaviors via modular building blocks, configure custom memories (short- and long-term), and link external APIs or local tools. The framework includes async support, logging, and debugging utilities, enabling rapid iteration and deployment of sophisticated conversational or task-oriented agents in Python projects.
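
Illustrative sketches

ClassiCore-Public: the preprocessing, model-selection, and tuning workflow described above can be pictured with a plain scikit-learn sketch. This is a generic illustration of that kind of workflow, not ClassiCore-Public's own API, and it uses randomized search as a stand-in for the Bayesian optimization the tool advertises.

    # Generic preprocess -> select -> tune -> evaluate workflow; plain
    # scikit-learn, not ClassiCore-Public's API.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import RandomizedSearchCV, train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    pipeline = Pipeline([
        ("scale", StandardScaler()),                         # data preprocessing
        ("model", RandomForestClassifier(random_state=0)),   # one entry from a "model zoo"
    ])

    search = RandomizedSearchCV(                             # stand-in for automated tuning
        pipeline,
        {"model__n_estimators": [100, 200, 400], "model__max_depth": [None, 5, 10]},
        n_iter=5, cv=3, random_state=0,
    )
    search.fit(X_train, y_train)
    print("held-out accuracy:", search.score(X_test, y_test))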
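LLMFlow: the Node-and-Flow chaining described above can be illustrated with a tiny hand-rolled Python sketch. The class and method names below are hypothetical stand-ins for the pattern, not LLMFlow's actual API.

    # Hand-rolled illustration of chaining Nodes into a Flow with threaded
    # context; hypothetical names, not LLMFlow's actual API.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Node:
        name: str
        run: Callable[[dict], dict]   # takes the context, returns an updated context

    @dataclass
    class Flow:
        nodes: List[Node] = field(default_factory=list)

        def add(self, node: Node) -> "Flow":
            self.nodes.append(node)
            return self

        def execute(self, context: dict) -> dict:
            for node in self.nodes:   # the context dict acts as shared memory between steps
                context = node.run(context)
                print(f"[{node.name}] context is now {context}")
            return context

    flow = (
        Flow()
        .add(Node("draft", lambda ctx: {**ctx, "draft": f"Summary of {ctx['doc']}"}))
        .add(Node("review", lambda ctx: {**ctx, "final": ctx["draft"].upper()}))
    )
    flow.execute({"doc": "quarterly report"})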
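Python AI Agent: the custom-tool registry and action loop described above can be sketched in plain Python. The decorator and the loop are illustrative assumptions, not the library's real interface.

    # Toy tool registry plus action loop; illustrative only, not the
    # Python AI Agent library's real interface.
    TOOLS = {}

    def tool(name):
        """Register a callable as a tool the agent may invoke."""
        def decorator(fn):
            TOOLS[name] = fn
            return fn
        return decorator

    @tool("add")
    def add(a: float, b: float) -> float:
        return a + b

    def run_agent(steps):
        """Execute a sequence of (tool, kwargs) actions, keeping a history."""
        history = []                  # stand-in for a memory module
        for tool_name, kwargs in steps:
            result = TOOLS[tool_name](**kwargs)
            history.append((tool_name, kwargs, result))
        return history

    # In a real agent the LLM would choose the steps; here they are hard-coded.
    print(run_agent([("add", {"a": 2, "b": 3})]))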
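Saiki: a declarative YAML agent definition submitted over REST might look roughly like the sketch below. The field names and endpoint are hypothetical, not Saiki's documented schema; PyYAML is assumed to be installed.

    # Illustrative sketch of declaring a chained agent workflow in YAML;
    # field names and the endpoint are hypothetical, not Saiki's schema.
    import textwrap
    import yaml  # PyYAML

    AGENT_YAML = textwrap.dedent("""
        agents:
          - name: fetcher
            task: download the latest sales report
          - name: summarizer
            task: summarize the fetched report
            depends_on: fetcher
    """)

    config = yaml.safe_load(AGENT_YAML)
    for agent in config["agents"]:
        print(agent["name"], "<-", agent.get("depends_on", "(entry point)"))

    # Submitting to a hypothetical REST endpoint might look like:
    # import requests
    # requests.post("http://localhost:8080/workflows", json=config, timeout=10)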
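Serverless AI Agent: the shape of an agent behavior packaged as a cloud function can be illustrated with a generic AWS-Lambda-style handler. The toy routing below is an assumption for illustration, not the framework's actual interface; a real deployment would call an LLM and external tools inside the handler.

    # Generic AWS-Lambda-style handler; a toy illustration of running agent
    # logic as a serverless function, not the framework's actual interface.
    import json

    def handler(event, context):
        """Entry point the cloud provider invokes for each request."""
        body = json.loads(event.get("body", "{}"))
        text = body.get("input", "")

        # Toy decision step; a real agent would call an LLM and tools here.
        if "weather" in text.lower():
            reply = "Routing to the weather tool..."
        else:
            reply = f"Echoing your request: {text}"

        return {"statusCode": 200, "body": json.dumps({"reply": reply})}

    # Local smoke test, no cloud account needed:
    print(handler({"body": json.dumps({"input": "What's the weather?"})}, None))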
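Stellar Chat: the context-enrichment step, retrieving the most relevant memories by vector similarity before invoking the LLM, can be illustrated generically. The embeddings below are random stand-ins; a real pipeline would use an embedding model and a store such as Pinecone, Weaviate, or FAISS.

    # Generic vector-memory retrieval feeding a prompt; random vectors stand
    # in for real embeddings, and nothing here is Stellar Chat's own API.
    import numpy as np

    rng = np.random.default_rng(0)
    memories = [
        "User prefers concise answers.",
        "Last ticket was about billing.",
        "User's plan is Enterprise.",
    ]
    memory_vecs = rng.normal(size=(len(memories), 8))   # pretend embeddings
    query_vec = rng.normal(size=8)

    def top_k(query, vectors, k=2):
        """Return indices of the k most cosine-similar vectors."""
        scores = vectors @ query / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(query))
        return np.argsort(-scores)[:k]

    context = [memories[i] for i in top_k(query_vec, memory_vecs)]
    prompt = "Context:\n" + "\n".join(context) + "\n\nUser: Why was I charged twice?"
    print(prompt)   # this enriched prompt would then go to the LLM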
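Discord AI Agent: the single-configuration-file idea described above can be sketched as below. The key names are hypothetical examples of the kinds of settings mentioned (model preference, command prefix, plugin directories), not the framework's documented schema.

    # Hypothetical single JSON configuration; key names are illustrative,
    # not Discord AI Agent's documented schema.
    import json
    import os

    CONFIG = json.loads("""
    {
      "model": "gpt-4o-mini",
      "command_prefix": "!",
      "plugin_dirs": ["plugins/moderation", "plugins/trivia"]
    }
    """)

    # Secrets are better read from the environment than stored in the file.
    token_present = "DISCORD_TOKEN" in os.environ

    for directory in CONFIG["plugin_dirs"]:
        print("would scan", directory, "for plugins")
    print("prefix:", CONFIG["command_prefix"],
          "| model:", CONFIG["model"],
          "| token set:", token_present)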