Advanced Docker Deployment Tools for Professionals

Discover cutting-edge Docker deployment tools built for intricate workflows. Perfect for experienced users and complex projects.

Docker Deployment

  • A web platform to discover, explore, and deploy diverse AI agents with searchable categories in one unified marketplace.
    What is AI Agent Marketplace?
    AI Agent Marketplace is built with Next.js and React to provide a centralized hub where users can browse, evaluate, and deploy a wide range of AI agents. The platform pulls agent metadata from community contributions, offering detailed descriptions, capability tags, and live in-browser demos. Users can filter agents by domain, function, or technology provider. For developers, the open-source repository includes a modular architecture with support for adding new agent entries, configuring API endpoints, and customizing UI components. Deployment options include hosting on Vercel or local Docker containers. By consolidating disparate AI agent projects into one searchable interface, the marketplace accelerates experimentation, collaboration, and integration into production workflows.
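    Since the entry above mentions running the marketplace in local Docker containers, the following is a minimal sketch using the Python Docker SDK. The image name and port are assumptions for illustration, not values published by the project.

```python
# Minimal sketch: run a locally built marketplace image via the Docker SDK.
# The image name "ai-agent-marketplace:local" and port 3000 are assumptions,
# not the project's documented values.
import docker

client = docker.from_env()
container = client.containers.run(
    "ai-agent-marketplace:local",   # hypothetical locally built image
    ports={"3000/tcp": 3000},       # assumed Next.js default port
    detach=True,
)
print(f"Started container {container.short_id}; browse http://localhost:3000")
```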
  • An open-source AI engine generating engaging 30-second videos from text prompts using text-to-video, TTS, and editing.
    What is AI Short Video Engine?
AI-Short-Video-Engine orchestrates multiple AI modules in an end-to-end pipeline to transform user-defined text prompts into polished short videos. First, the system leverages large language models to generate a storyboard and script. Next, Stable Diffusion creates scene artwork, while Bark provides realistic voice narration. The engine assembles images, text overlays, and audio into a cohesive video, adding transitions and background music automatically. Its plugin-based architecture allows customization of each stage: from swapping in alternative text-to-image or TTS models to adjusting video resolution and style templates. Deployed via Docker or native Python, it offers both CLI commands and RESTful API endpoints, enabling developers to integrate AI-driven video production into existing workflows seamlessly.
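    As a rough illustration of driving the engine through its REST interface, here is a sketch that submits a prompt and polls for the result. The base URL, routes, and JSON fields are assumptions; consult the project's API documentation for the actual contract.

```python
# Minimal sketch: submit a text prompt to an assumed generation endpoint
# and poll until the clip is ready. URLs and field names are hypothetical.
import time
import requests

BASE_URL = "http://localhost:8000"  # assumed local deployment

job = requests.post(
    f"{BASE_URL}/videos",
    json={"prompt": "A sunrise over a misty forest"},
).json()

while True:
    status = requests.get(f"{BASE_URL}/videos/{job['id']}").json()
    if status["state"] == "done":
        print("Video ready:", status["url"])
        break
    time.sleep(5)
```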
  • Aladin is an open-source autonomous LLM agent enabling scripted workflows, memory-enabled decision-making, and plugin-based task orchestration.
    What is Aladin?
    Aladin provides a modular architecture that allows developers to define autonomous agents powered by large language models (LLMs). Each agent can load memory backends (e.g., SQLite, in-memory), utilize dynamic prompt templates, and integrate custom plugins for external API calls or local command execution. It features a task planner that breaks high-level goals into sequenced actions, executing them in order and iterating based on LLM feedback. Configuration is managed through YAML files and environment variables, making it adaptable to various use cases. Users can deploy Aladin via Docker Compose or pip installation. The CLI and FastAPI-based HTTP endpoints let users trigger agents, monitor execution, and inspect memory states, facilitating integration with CI/CD pipelines, chat interfaces, or custom dashboards.
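    Because the description mentions FastAPI-based HTTP endpoints for triggering agents, here is a minimal sketch of what such a call could look like. The route, payload, and port are assumptions for illustration, not Aladin's documented API.

```python
# Minimal sketch: trigger an agent run over an assumed HTTP endpoint.
import requests

resp = requests.post(
    "http://localhost:8080/agents/run",   # hypothetical route and port
    json={"agent": "ops-assistant", "goal": "Summarize today's error logs"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # e.g. planned actions and the final result
```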
  • Integrate AI models easily with no machine learning knowledge.
    What is Cargoship?
Cargoship provides a streamlined solution for integrating AI into your applications without requiring any machine learning expertise. Select from our collection of open-source AI models, packaged conveniently in Docker containers. By running the container, you can effortlessly deploy the models and access them via a well-documented API. This makes it easier for developers at any skill level to incorporate sophisticated AI capabilities into their software, speeding up development and reducing complexity.
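    The run-the-container-then-call-the-API flow described above might look like the following sketch. The image name, port, and inference route are assumptions, not Cargoship's documented values.

```python
# Minimal sketch: start a packaged model container, then call its HTTP API.
# Image name, port, and route are hypothetical.
import time
import docker
import requests

client = docker.from_env()
client.containers.run(
    "cargoship/sentiment:latest",   # hypothetical model image
    ports={"8000/tcp": 8000},
    detach=True,
)
time.sleep(10)  # give the server a moment to start

result = requests.post(
    "http://localhost:8000/predict",  # assumed inference route
    json={"text": "Shipping was fast and the product works great."},
).json()
print(result)
```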
  • ClassiCore-Public automates ML classification, offering data preprocessing, model selection, hyperparameter tuning, and scalable API deployment.
    What is ClassiCore-Public?
    ClassiCore-Public provides a comprehensive environment for building, optimizing, and deploying classification models. It features an intuitive pipeline builder that handles raw data ingestion, cleaning, and feature engineering. The built-in model zoo includes algorithms like Random Forests, SVMs, and deep learning architectures. Automated hyperparameter tuning uses Bayesian optimization to find optimal settings. Trained models can be deployed as RESTful APIs or microservices, with monitoring dashboards tracking performance metrics in real time. Extensible plugins let developers add custom preprocessing, visualization, or new deployment targets, making ClassiCore-Public ideal for industrial-scale classification tasks.
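    Since trained models are deployed as RESTful APIs, querying one could look like the sketch below. The endpoint path, model name, and feature schema are assumptions for illustration.

```python
# Minimal sketch: query a deployed classification model over REST.
# Route and payload shape are hypothetical.
import requests

sample = {"features": [5.1, 3.5, 1.4, 0.2]}  # hypothetical feature vector
resp = requests.post("http://localhost:9000/models/iris/predict", json=sample)
resp.raise_for_status()
print(resp.json())  # e.g. {"label": "setosa", "probability": 0.98}
```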
  • OmniMind0 is an open-source Python framework enabling autonomous multi-agent workflows with built-in memory management and plugin integration.
    What is OmniMind0?
    OmniMind0 is a comprehensive agent-based AI framework written in Python that allows creation and orchestration of multiple autonomous agents. Each agent can be configured to handle specific tasks—such as data retrieval, summarization, or decision-making—while sharing state through pluggable memory backends like Redis or JSON files. The built-in plugin architecture lets you extend functionality with external APIs or custom commands. It supports OpenAI, Azure, and Hugging Face models, and offers deployment via CLI, REST API server, or Docker for flexible integration into your workflows.
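    To make the agent-plus-shared-memory idea concrete, here is a generic sketch of the pattern. It is an illustration of the concept only, not OmniMind0's actual classes or API.

```python
# Generic sketch: agents that share state through a pluggable memory backend.
import json
from pathlib import Path

class JsonMemory:
    """Memory backend persisting shared state to a JSON file."""
    def __init__(self, path: str):
        self.path = Path(path)
        self.state = json.loads(self.path.read_text()) if self.path.exists() else {}

    def set(self, key, value):
        self.state[key] = value
        self.path.write_text(json.dumps(self.state))

    def get(self, key, default=None):
        return self.state.get(key, default)

class Agent:
    """Handles one task while reading and writing the shared memory."""
    def __init__(self, name, task, memory):
        self.name, self.task, self.memory = name, task, memory

    def run(self, payload):
        result = self.task(payload, self.memory)
        self.memory.set(self.name, result)
        return result

memory = JsonMemory("shared_state.json")
retriever = Agent("retriever", lambda p, m: f"fetched:{p}", memory)
summarizer = Agent("summarizer", lambda p, m: f"summary of {m.get('retriever')}", memory)
retriever.run("quarterly-report")
print(summarizer.run(None))
```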
  • Open-source framework for building production-ready AI chatbots with customizable memory, vector search, multi-turn dialogue, and plugin support.
    What is Stellar Chat?
    Stellar Chat empowers teams to build conversational AI agents by providing a robust framework that abstracts LLM interactions, memory management, and tool integrations. At its core, it features an extensible pipeline that handles user input preprocessing, context enrichment through vector-based memory retrieval, and LLM invocation with configurable prompting strategies. Developers can plug in popular vector storage solutions like Pinecone, Weaviate, or FAISS, and integrate third-party APIs or custom plugins for tasks like web search, database queries, or enterprise application control. With support for streaming outputs and real-time feedback loops, Stellar Chat ensures responsive user experiences. It also includes starter templates and best-practice examples for customer support bots, knowledge search, and internal workflow automation. Deployed with Docker or Kubernetes, it scales to meet production demands while remaining fully open-source under the MIT license.
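    The retrieval-then-prompt pipeline described above can be sketched as follows. The toy embedding and prompt format are stand-ins; a real deployment would use one of the named vector stores (Pinecone, Weaviate, FAISS) and an LLM client.

```python
# Minimal sketch: vector-based memory retrieval feeding an LLM prompt.
# The embedding function is a placeholder, not a real model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy embedding: bucket characters into a fixed-size vector."""
    vec = np.zeros(64)
    for ch in text.encode():
        vec[ch % 64] += 1
    return vec / (np.linalg.norm(vec) + 1e-9)

memory = ["Order 1042 shipped on Tuesday.", "Refunds take 5-7 business days."]
memory_vecs = np.stack([embed(m) for m in memory])

query = "When will I get my refund?"
scores = memory_vecs @ embed(query)           # cosine similarity (vectors normalized)
context = memory[int(np.argmax(scores))]      # most relevant memory entry

prompt = f"Context: {context}\nUser: {query}\nAssistant:"
print(prompt)  # this prompt would be passed to the configured LLM
```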
  • A modular FastAPI backend enabling automated document data extraction and parsing using Google Document AI and OCR.
    What is DocumentAI-Backend?
    DocumentAI-Backend is a lightweight backend framework that automates extraction of text, form fields, and structured data from documents. It offers REST API endpoints for uploading PDFs or images, processes them via Google Document AI with OCR fallback, and returns parsed results in JSON. Built with Python, FastAPI, and Docker, it enables quick integration into existing systems, scalable deployments, and customization through configurable pipelines and middleware.
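    An upload endpoint of the kind described above could be sketched in FastAPI as follows. The route name and response shape are assumptions; the real project wires this step to Google Document AI with an OCR fallback.

```python
# Minimal sketch: a FastAPI route accepting a document upload.
# Route and response fields are hypothetical.
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/extract")
async def extract(file: UploadFile = File(...)):
    content = await file.read()
    # A real implementation would send `content` to Document AI / OCR here.
    return {"filename": file.filename, "bytes": len(content), "fields": {}}
```

    Saved as main.py, this sketch runs locally with `uvicorn main:app` and accepts multipart file uploads at /extract.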
  • Sys-Agent is a self-hosted AI-driven personal assistant enabling CLI command execution, file management, and system monitoring via natural language.
    What is Sys-Agent?
Sys-Agent provides a secure, self-hosted environment where users issue natural language instructions to perform system-level tasks. It connects with AI backends such as OpenAI, local LLMs, or other model services, translating prompts into shell commands, file operations, and infrastructure checks. Users can customize prompts, define task templates, scale through Docker or Kubernetes, and extend functionality via plugins. Sys-Agent logs all actions and offers audit trails to ensure transparency and security.
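    The translate-log-execute loop described above can be illustrated with the sketch below. The `nl_to_command` lookup stands in for an LLM call; this is an illustrative pattern, not Sys-Agent's actual implementation.

```python
# Minimal sketch: translate an instruction to a shell command, log it for the
# audit trail, then execute. The translation step is a placeholder for an LLM.
import logging
import shlex
import subprocess

logging.basicConfig(filename="sys_agent_audit.log", level=logging.INFO)

def nl_to_command(instruction: str) -> str:
    # Placeholder: a real agent would ask its LLM backend for this command.
    return {"show disk usage": "df -h"}.get(instruction, "echo 'unsupported request'")

def run(instruction: str) -> str:
    command = nl_to_command(instruction)
    logging.info("instruction=%r command=%r", instruction, command)  # audit trail
    result = subprocess.run(shlex.split(command), capture_output=True, text=True)
    return result.stdout

print(run("show disk usage"))
```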