Ultimate Custom API Solutions for Everyone

Discover all-in-one custom API tools that adapt to your needs. Reach new heights of productivity with ease.

custom APIs

  • An open-source Python framework to build LLM-driven agents with memory, tool integration, and multi-step task planning.
    What is LLM-Agent?
    LLM-Agent is a lightweight, extensible framework for building AI agents powered by large language models. It provides abstractions for conversation memory, dynamic prompt templates, and seamless integration of custom tools or APIs. Developers can orchestrate multi-step reasoning processes, maintain state across interactions, and automate complex tasks such as data retrieval, report generation, and decision support. By combining memory management with tool usage and planning, LLM-Agent streamlines the development of intelligent, task-oriented agents in Python.
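    For illustration, here is a minimal, self-contained sketch of the pattern described above: a tool registry, a conversation memory, and a multi-step loop. The model call is stubbed and all names are hypothetical; this is not LLM-Agent's actual API.

```python
# Illustrative sketch only: a generic tool-registry-plus-memory loop, not LLM-Agent's API.
from datetime import datetime, timezone
from typing import Callable, Dict, List

TOOLS: Dict[str, Callable[[], str]] = {}

def tool(name: str):
    """Register a plain Python function as a tool the agent may call."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@tool("get_time")
def get_time() -> str:
    return datetime.now(timezone.utc).isoformat()

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM call.
    if "tool[get_time]" in prompt:
        return "FINAL The current UTC time is shown in the tool result above."
    if "time" in prompt:
        return "CALL get_time"
    return "FINAL Sorry, I can't help with that."

def run_agent(user_msg: str, max_steps: int = 4) -> str:
    memory: List[str] = [f"user: {user_msg}"]       # conversation memory
    for _ in range(max_steps):                      # multi-step planning loop
        reply = fake_llm("\n".join(memory))
        if reply.startswith("CALL "):               # the model requested a tool
            name = reply.split(" ", 1)[1]
            memory.append(f"tool[{name}]: {TOOLS[name]()}")
            continue
        return reply.removeprefix("FINAL ").strip()
    return "step limit reached"

print(run_agent("What time is it?"))
```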
  • A framework to run local large language models with function calling support for offline AI agent development.
    What is Local LLM with Function Calling?
    Local LLM with Function Calling allows developers to create AI agents that run entirely on local hardware, eliminating data privacy concerns and cloud dependencies. The framework includes sample code for integrating local LLMs such as LLaMA, GPT4All, or other open-weight models, and demonstrates how to configure function schemas that the model can invoke to perform tasks like fetching data, executing shell commands, or interacting with APIs. Users can extend the design by defining custom function endpoints, customizing prompts, and handling function responses. This lightweight solution simplifies the process of building offline AI assistants, chatbots, and automation tools for a wide range of applications.
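    As a rough sketch of the function-calling flow described above, the snippet below hard-codes the JSON that a function-calling-tuned local model might emit and dispatches it to a registered Python function; the schema and names are illustrative, not taken from this project.

```python
# Illustrative only: dispatching a model-emitted function call.
# `model_output` is hard-coded here; in practice it would come from a local model
# (e.g. a llama.cpp or GPT4All backend) prompted with this function schema.
import json

FUNCTIONS = {
    "get_weather": {
        "description": "Return the weather for a city.",
        "parameters": {"city": "string"},
        "impl": lambda city: f"Sunny in {city}, 21 C",
    },
}

model_output = '{"function": "get_weather", "arguments": {"city": "Lisbon"}}'

call = json.loads(model_output)
spec = FUNCTIONS[call["function"]]
missing = set(spec["parameters"]) - set(call["arguments"])
if missing:
    raise ValueError(f"missing arguments: {missing}")

result = spec["impl"](**call["arguments"])
print(result)  # in a real agent, this result is fed back to the model as the function response
```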
  • An open-source AI agent framework to build, orchestrate, and deploy intelligent agents with tool integrations and memory management.
    What is Wren?
    Wren is a Python-based AI agent framework designed to help developers create, manage, and deploy autonomous agents. It provides abstractions for defining tools (APIs or functions), memory stores for context retention, and orchestration logic to handle multi-step reasoning. With Wren, you can rapidly prototype chatbots, task automation scripts, and research assistants by composing LLM calls, registering custom tools, and persisting conversation history. Its modular design and callback capabilities make it easy to extend and integrate with existing applications.
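    A minimal sketch of the persisted-memory idea mentioned above, assuming nothing about Wren's real classes: conversation turns are written to a JSON file so context survives restarts.

```python
# Illustrative sketch of a file-backed memory store; not Wren's actual interface.
import json
from pathlib import Path

class FileMemory:
    """Persist conversation turns to a JSON file so context survives restarts."""
    def __init__(self, path: str = "history.json"):
        self.path = Path(path)
        self.turns = json.loads(self.path.read_text()) if self.path.exists() else []

    def add(self, role: str, text: str) -> None:
        self.turns.append({"role": role, "text": text})
        self.path.write_text(json.dumps(self.turns, indent=2))

memory = FileMemory()
memory.add("user", "Summarise yesterday's sales report.")
memory.add("assistant", "Sales rose 4% week over week.")
print(len(memory.turns), "turns stored in", memory.path)
```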
  • Kin Kernel is a modular AI agent framework enabling automated workflows through LLM orchestration, memory management, and tool integrations.
    What is Kin Kernel?
    Kin Kernel is a lightweight, open-source kernel framework for constructing AI-powered digital workers. It provides a unified system for orchestrating large language models, managing contextual memory, and integrating custom tools or APIs. With an event-driven architecture, Kin Kernel supports asynchronous task execution, session tracking, and extensible plugins. Developers define agent behaviors, register external functions, and configure multi-LLM routing to automate workflows ranging from data extraction to customer support. The framework also includes built-in logging and error handling to facilitate monitoring and debugging. Designed for flexibility, Kin Kernel can be integrated into web services, microservices, or standalone Python applications, enabling organizations to deploy robust AI agents at scale.
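    To make the event-driven idea concrete, here is a generic asyncio dispatcher in the same spirit; the event names and decorator are invented for the example and are not Kin Kernel's plugin API.

```python
# Illustrative only: a minimal event-driven async dispatcher, not Kin Kernel's API.
import asyncio
from collections import defaultdict

handlers = defaultdict(list)

def on(event_type: str):
    """Register an async handler for an event type."""
    def wrap(fn):
        handlers[event_type].append(fn)
        return fn
    return wrap

@on("document.received")
async def extract_fields(payload):
    await asyncio.sleep(0.1)            # stands in for an LLM extraction call
    print("extracted fields from", payload["name"])

@on("document.received")
async def log_event(payload):
    print("audit log:", payload)

async def emit(event_type: str, payload: dict):
    # Run every registered handler for this event concurrently.
    await asyncio.gather(*(h(payload) for h in handlers[event_type]))

asyncio.run(emit("document.received", {"name": "invoice-42.pdf"}))
```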
  • A comprehensive B2B billing and revenue management platform for modern finance teams.
    What is Received?
    Received is a next-generation platform for B2B finance teams, facilitating the management of custom contracts and complex pricing models. It offers automated invoicing, contract management, usage-based invoicing, and custom APIs. By centralizing revenue streams and providing real-time data insights, the platform allows businesses to streamline billing processes, reduce late payments, and maintain a healthy cash flow. It aims to replace traditional spreadsheets and eliminate IT overhead, creating a seamless, automated financial environment.
  • An AI Agent framework enabling multiple autonomous agents to self-coordinate and collaborate on complex tasks using conversational workflows.
    What is Self Collab AI?
    Self Collab AI provides a modular framework where developers define autonomous agents, communication channels, and task objectives. Agents use predefined prompts and patterns to negotiate responsibilities, exchange data, and iterate on solutions. Built in Python with easy-to-extend interfaces, it supports integration with LLMs, custom plugins, and external APIs. Teams can rapidly prototype complex workflows, such as research assistants, content generation, or data analysis pipelines, by configuring agent roles and collaboration rules without writing deep orchestration code.
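    A toy sketch of the role-based collaboration pattern described above: two agents share a channel and hand work to each other. The replies are canned; real agents would call an LLM, and none of these names come from Self Collab AI itself.

```python
# Illustrative pattern only: role-based agents passing messages over a shared channel.
from collections import deque

class Agent:
    def __init__(self, name: str, role: str):
        self.name, self.role = name, role

    def act(self, message: str) -> str:
        # A real agent would call an LLM here; these replies are canned.
        if self.role == "researcher":
            return f"{self.name}: found 3 sources on '{message}'"
        return f"{self.name}: drafted a summary from ({message})"

channel = deque(["impact of caching on API latency"])   # shared task channel
pipeline = [Agent("Ada", "researcher"), Agent("Bo", "writer")]

for agent in pipeline:                                   # each agent consumes and replies
    reply = agent.act(channel.popleft())
    channel.append(reply)
    print(reply)
```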
  • SimplerLLM is a lightweight Python framework for building and deploying customizable AI agents using modular LLM chains.
    What is SimplerLLM?
    SimplerLLM provides developers with a minimalist API to compose LLM chains, define agent actions, and orchestrate tool calls. With built-in abstractions for memory retention, prompt templates, and output parsing, users can rapidly assemble conversational agents that maintain context across interactions. The framework integrates with OpenAI, Azure, and Hugging Face models, and supports pluggable toolkits for searches, calculators, and custom APIs. Its lightweight core minimizes dependencies, allowing agile development and easy deployment in the cloud or at the edge. Whether building chatbots, QA assistants, or task automators, SimplerLLM simplifies end-to-end LLM agent pipelines.
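    A bare-bones chain in the style described above (prompt template, then model, then output parser), with the model call stubbed out; the helper names are made up for the example rather than taken from SimplerLLM.

```python
# Illustrative chain sketch: template -> model -> parser. The model call is a stub.
import json

def render(template: str, **values) -> str:
    return template.format(**values)

def fake_llm(prompt: str) -> str:
    # Stand-in for an OpenAI / Azure / Hugging Face call.
    return '{"sentiment": "positive", "confidence": 0.92}'

def parse_json(text: str) -> dict:
    return json.loads(text)

def run_chain(review: str) -> dict:
    prompt = render("Classify the sentiment of: {review}\nReply as JSON.", review=review)
    return parse_json(fake_llm(prompt))

print(run_chain("The new billing dashboard is fantastic."))
```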
  • AI Agents is a Python framework for building modular AI agents with customizable tools, memory, and LLM integration.
    What is AI Agents?
    AI Agents is a comprehensive Python framework designed to streamline the development of intelligent software agents. It offers plug-and-play toolkits for integrating external services such as web search, file I/O, and custom APIs. With built-in memory modules, agents maintain context across interactions, enabling advanced multi-step reasoning and persistent conversations. The framework supports multiple LLM providers, including OpenAI and open-source models, allowing developers to switch or combine models easily. Users define tasks, assign tools and memory policies, and the core engine orchestrates prompt construction, tool invocation, and response parsing for seamless agent operation.
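    The provider-swapping idea can be sketched with a small abstraction like the one below; both providers are stubs, since wiring in a real OpenAI or open-source backend would need credentials and extra dependencies.

```python
# Illustrative provider abstraction: swap LLM backends behind one interface.
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OpenAIStub(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"[openai-stub] answer to: {prompt}"

class LocalModelStub(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"[local-stub] answer to: {prompt}"

def answer(question: str, provider: LLMProvider) -> str:
    # The same agent code runs regardless of which backend is plugged in.
    return provider.complete(question)

print(answer("List three uses of webhooks.", OpenAIStub()))
print(answer("List three uses of webhooks.", LocalModelStub()))
```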
  • Agent-Baba enables developers to create autonomous AI agents with customizable plugins, conversational memory, and automated task workflows.
    What is Agent-Baba?
    Agent-Baba provides a comprehensive toolkit for creating and managing autonomous AI agents tailored to specific tasks. It offers a plugin architecture for extending capabilities, a memory system to retain conversational context, and workflow automation for sequential task execution. Developers can integrate tools like web scrapers, databases, and custom APIs into agents. The framework simplifies configuration through declarative YAML or JSON schemas, supports multi-agent collaboration, and provides monitoring dashboards to track agent performance and logs, enabling iterative improvement and seamless deployment across environments.
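    To illustrate the declarative-configuration idea (shown here with JSON; a YAML file would carry the same fields), the sketch below builds an agent object from a config document. The field names are hypothetical, not Agent-Baba's schema.

```python
# Illustrative only: building an agent from a declarative config document.
import json

config = json.loads("""
{
  "name": "support-bot",
  "plugins": ["web_scraper", "ticket_db"],
  "memory": {"kind": "sliding_window", "max_turns": 20},
  "workflow": ["classify_ticket", "draft_reply", "await_review"]
}
""")

class ConfiguredAgent:
    def __init__(self, cfg: dict):
        self.name = cfg["name"]
        self.plugins = list(cfg["plugins"])
        self.steps = list(cfg["workflow"])

    def describe(self) -> str:
        return f"{self.name}: plugins={self.plugins}, workflow={' -> '.join(self.steps)}"

print(ConfiguredAgent(config).describe())
```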
  • Agent-Squad coordinates multiple specialized AI agents to decompose tasks, orchestrate workflows, and integrate tools for complex problem solving.
    What is Agent-Squad?
    Agent-Squad is a modular Python framework that empowers teams to design, deploy, and run multi-agent systems for complex task execution. At its core, Agent-Squad lets users configure diverse agent profiles—such as data retrievers, summarizers, coders, and validators—that communicate through defined channels and share memory contexts. By decomposing high-level objectives into subtasks, the framework orchestrates parallel processing and leverages LLMs alongside external APIs, databases, or custom tools. Developers can specify workflows in JSON or code, monitor agent interactions, and adapt strategies dynamically using built-in logging and evaluation utilities. Common applications include automated research assistants, content generation pipelines, intelligent QA bots, and iterative code review processes. The open-source design integrates seamlessly with AWS services, enabling scalable deployments.
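    As a rough sketch of the decompose-and-parallelise pattern, the snippet below fans one objective out to three stub "specialists" with a thread pool; the planner output is hard-coded and nothing here reflects Agent-Squad's real workflow format.

```python
# Illustrative pattern: decompose an objective into subtasks run by specialist workers.
from concurrent.futures import ThreadPoolExecutor

def retriever(topic: str) -> str:  return f"notes on {topic}"
def summariser(topic: str) -> str: return f"two-line summary of {topic}"
def validator(topic: str) -> str:  return f"fact-check report for {topic}"

def decompose(objective: str):
    # A planner agent would normally produce this list; here it is hard-coded.
    return [(retriever, objective), (summariser, objective), (validator, objective)]

objective = "impact of usage-based pricing on churn"
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda job: job[0](job[1]), decompose(objective)))

for line in results:
    print(line)
```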
  • Fenado AI helps founders launch their apps and websites without needing a tech team.
    What is Fenado AI?
    Fenado AI, created by experienced founders Azhar Iqubal and Manish Bisht, offers a no-code platform for launching websites and mobile apps. The platform leverages AI to help users design and build their digital products swiftly, from idea to execution, without any programming knowledge. Fenado AI's core services include instant prototypes, AI-powered creation, and scalable solutions for comprehensive business needs. Whether it's creating functional mobile apps, developing custom APIs, or providing dedicated tech support, Fenado AI simplifies the process for founders, enabling them to turn their visions into reality quickly and efficiently.
  • InfantAgent is a Python framework for rapidly building intelligent AI agents with pluggable memory, tools, and LLM support.
    What is InfantAgent?
    InfantAgent offers a lightweight structure for designing and deploying intelligent agents in Python. It integrates with popular LLMs (OpenAI, Hugging Face), supports persistent memory modules, and enables custom tool chains. Out of the box, you get a conversational interface, task orchestration, and policy-driven decision making. The framework’s plugin architecture allows easy extension for domain-specific tools and APIs, making it ideal for prototyping research agents, automating workflows, or embedding AI assistants into applications.
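    A small sketch of a plugin-style tool registry of the kind described above, using subclass discovery; the class names are invented for the example and are not InfantAgent's extension API.

```python
# Illustrative plugin pattern: tools discovered by subclassing a common base class.
class ToolPlugin:
    """Subclass and set `name` to expose a new tool to the agent."""
    name = ""

    def run(self, query: str) -> str:
        raise NotImplementedError

class WordCount(ToolPlugin):
    name = "wordcount"
    def run(self, query: str) -> str:
        return f"{len(query.split())} words"

class Echo(ToolPlugin):
    name = "echo"
    def run(self, query: str) -> str:
        return query.upper()

registry = {cls.name: cls() for cls in ToolPlugin.__subclasses__()}
print(registry["wordcount"].run("build AI agents with pluggable tools"))
print(registry["echo"].run("hello agent"))
```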