AppAgent is an LLM-based multimodal agent framework designed to operate smartphone applications without manual scripting. It combines screen capture, GUI element detection, OCR parsing, and natural-language planning to understand app layouts and user intents. The framework then issues touch events (tap, swipe, text input) to an Android device or emulator to automate workflows. Researchers and developers can customize prompts, configure LLM APIs, and extend modules to support new apps and tasks, enabling adaptive, scriptless mobile automation.
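As a rough illustration of the execution layer, touch events like the ones described above can be delivered to an Android device or emulator through `adb shell input`. The sketch below builds such commands; the helper names are illustrative and are not part of AppAgent's actual API.

```python
import subprocess

def adb_tap(x: int, y: int, device: str = None) -> list:
    """Build an adb command that taps screen coordinates (x, y)."""
    cmd = ["adb"] + (["-s", device] if device else [])
    return cmd + ["shell", "input", "tap", str(x), str(y)]

def adb_swipe(x1: int, y1: int, x2: int, y2: int, duration_ms: int = 300) -> list:
    """Build an adb command that swipes from (x1, y1) to (x2, y2)."""
    return ["adb", "shell", "input", "swipe",
            str(x1), str(y1), str(x2), str(y2), str(duration_ms)]

def adb_text(text: str) -> list:
    """Build an adb command that types text; 'input text' needs %s for spaces."""
    return ["adb", "shell", "input", "text", text.replace(" ", "%s")]

def execute(cmd: list) -> None:
    """Run the command against a connected device (requires adb on PATH)."""
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Print the commands an agent might emit for one planning step.
    print(adb_tap(540, 960))
    print(adb_swipe(540, 1600, 540, 400))
    print(adb_text("hello world"))
```

In practice an agent would call `execute()` on each command after the planner selects an action; building the command list separately keeps the action layer easy to log and unit-test without a device attached.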