Ultimate Community-Driven Solutions for Everyone

Discover all-in-one community-driven tools that adapt to your needs. Reach new heights of productivity with ease.

Community-Driven

  • An open-source framework enabling modular LLM-powered agents with integrated toolkits and multi-agent coordination.
    What is Agents with ADK?
    Agents with ADK is an open-source Python framework designed to streamline the creation of intelligent agents powered by large language models. It includes modular agent templates, built-in memory management, tool execution interfaces, and multi-agent coordination capabilities. Developers can quickly plug in custom functions or external APIs, configure planning and reasoning chains, and monitor agent interactions. The framework supports integration with popular LLM providers and provides logging, retry logic, and extensibility for production deployments.
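    A minimal sketch of defining an agent in this style, assuming the ADK here refers to Google's Agent Development Kit (google-adk); if the project targets a different ADK, treat the import path and Agent parameters as placeholders:
    ```python
    # Sketch assuming google-adk; adjust imports if "ADK" refers to another kit.
    from google.adk.agents import Agent

    def get_weather(city: str) -> dict:
        """Toy tool: return a canned weather report for a city."""
        return {"city": city, "forecast": "sunny", "temp_c": 22}

    # The framework inspects the function signature and docstring to expose it
    # to the underlying LLM as a callable tool.
    root_agent = Agent(
        name="weather_agent",
        model="gemini-2.0-flash",  # any supported LLM provider/model id
        instruction="Answer weather questions by calling get_weather.",
        tools=[get_weather],
    )
    ```
    An agent defined this way is typically launched through the framework's runner or CLI rather than invoked directly from the script.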
  • An open web platform to discover, filter, and contribute AI agents with detailed listings and community submissions.
    What is AI Agent Marketplace?
    AI Agent Marketplace is a community-driven directory for AI agents, allowing developers, researchers, and enthusiasts to discover, evaluate, and contribute agents. Users can filter agents by category, view detailed functionality and integration instructions, and submit their own agents via pull requests. The platform aggregates metadata, links, and examples for each agent, making it easier to compare capabilities and find the right tool for specific use cases.
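    To illustrate the kind of metadata the directory aggregates and filters, here is a hypothetical listing structure; the field names are invented for this sketch and are not the marketplace's actual schema:
    ```python
    # Hypothetical listing entries; field names are illustrative only.
    agents = [
        {"name": "Lagent", "category": "framework", "tags": ["planning", "tools"]},
        {"name": "PromptsLabs", "category": "prompt-library", "tags": ["evaluation"]},
    ]

    def filter_by_category(entries, category):
        """Return only the listings that match the requested category."""
        return [e for e in entries if e["category"] == category]

    print(filter_by_category(agents, "framework"))
    ```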
  • Lagent is an open-source AI agent framework for orchestrating LLM-powered planning, tool use, and multi-step task automation.
    What is Lagent?
    Lagent is a developer-focused framework that enables the creation of intelligent agents on top of large language models. It offers dynamic planning modules that break tasks into subgoals, memory stores to maintain context over long sessions, and tool integration interfaces for API calls or external service access. With customizable pipelines, users define agent behaviors, prompting strategies, error handling, and output parsing. Lagent's logging and debugging tools help monitor decision steps, while its scalable architecture supports local, cloud, or enterprise deployments. It accelerates building autonomous assistants, data analyzers, and workflow automations.
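    A hypothetical sketch of the plan-act loop such a framework orchestrates; the class and function names below are illustrative stand-ins, not Lagent's actual API:
    ```python
    # Illustrative only -- names do not correspond to Lagent's real classes.
    from dataclasses import dataclass, field

    @dataclass
    class Memory:
        """Keeps the running context an agent accumulates across steps."""
        events: list = field(default_factory=list)

        def add(self, item: str) -> None:
            self.events.append(item)

    def plan(task: str) -> list[str]:
        """Stand-in planner: break a task into subgoals (an LLM would do this)."""
        return [f"research: {task}", f"summarize: {task}"]

    def run_tool(subgoal: str) -> str:
        """Stand-in tool call (e.g. a search API or Python interpreter)."""
        return f"result for '{subgoal}'"

    def run_agent(task: str) -> list:
        memory = Memory()
        for subgoal in plan(task):           # dynamic planning
            observation = run_tool(subgoal)  # tool execution
            memory.add(observation)          # context retention
        return memory.events

    print(run_agent("compare two datasets"))
    ```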
  • A community-driven library of prompts for testing new LLMs.
    What is PromptsLabs?
    PromptsLabs is a platform where users can discover and share prompts for testing new language models. The community-driven library provides a wide range of copy-paste prompts along with their expected outputs, helping users understand and evaluate the performance of various LLMs. Users can also contribute their own prompts, ensuring a continually growing and up-to-date resource.
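    A small sketch of how such prompt/expected-output pairs could be replayed against a model for evaluation; `call_model` is a placeholder for whatever LLM client you use, and the pairs shown are invented examples rather than entries from the library:
    ```python
    # Hypothetical prompt-library entries: a prompt plus the output we expect.
    prompts = [
        {"prompt": "What is 17 * 3?", "expected": "51"},
        {"prompt": "Spell 'strawberry' backwards.", "expected": "yrrebwarts"},
    ]

    def call_model(prompt: str) -> str:
        """Placeholder for a real LLM API call (hosted or local model)."""
        raise NotImplementedError

    def evaluate(entries):
        """Count how many prompts the model answers as expected."""
        passed = 0
        for entry in entries:
            answer = call_model(entry["prompt"])
            if entry["expected"].lower() in answer.lower():
                passed += 1
        return passed, len(entries)
    ```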
  • A Pythonic framework implementing the Model Context Protocol to build and run AI agent servers with custom tools.
    What is FastMCP?
    FastMCP is an open-source Python framework for building MCP (Model Context Protocol) servers and clients that empower LLMs with external tools, data sources, and custom prompts. Developers define tool classes and resource handlers in Python, register them with the FastMCP server, and deploy using transport protocols like HTTP, STDIO, or SSE. The framework’s client library offers an asynchronous interface for interacting with any MCP server, facilitating seamless integration of AI agents into applications.
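    A minimal sketch of a FastMCP server exposing one tool, following the decorator style the project documents; exact details can differ between versions:
    ```python
    from fastmcp import FastMCP

    mcp = FastMCP("Demo Server")

    @mcp.tool()
    def add(a: int, b: int) -> int:
        """Add two numbers."""
        return a + b

    if __name__ == "__main__":
        # Defaults to the STDIO transport; HTTP/SSE transports are also supported.
        mcp.run()
    ```
    On the client side, the library's asynchronous Client can connect to a running server and call tools such as `add` by name.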
  • pyafai is a modular Python framework for building, training, and running autonomous AI agents with plug-in memory and tool support.
    What is pyafai?
    pyafai is an open-source Python library designed to help developers architect, configure, and execute autonomous AI agents. It offers pluggable modules for memory management to retain context, tool integration for external API calls, observers for environment monitoring, planners for decision making, and an orchestrator to run agent loops. Logging and monitoring features provide visibility into agent performance and behavior. pyafai supports major LLM providers out of the box, enables custom module creation, and reduces boilerplate so teams can rapidly prototype virtual assistants, research bots, and automation workflows with full control over each component.
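    A hypothetical composition of the components described above (tool registry, memory, orchestrator loop); the names are illustrative and not taken from pyafai's actual API:
    ```python
    # Illustrative only -- not pyafai's real classes or entry points.
    class ToolRegistry:
        """Maps tool names to plain Python callables the agent may invoke."""
        def __init__(self):
            self._tools = {}

        def register(self, name, fn):
            self._tools[name] = fn

        def call(self, name, *args):
            return self._tools[name](*args)

    class Orchestrator:
        """Runs the agent loop: invoke a tool, record the result in memory."""
        def __init__(self, tools: ToolRegistry):
            self.tools = tools
            self.memory = []  # stands in for a pluggable memory module

        def step(self, tool_name, *args):
            result = self.tools.call(tool_name, *args)
            self.memory.append((tool_name, result))
            return result

    tools = ToolRegistry()
    tools.register("echo", lambda text: text.upper())
    agent = Orchestrator(tools)
    print(agent.step("echo", "hello"))  # -> HELLO
    ```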