Musical-Agent-Systems offers a modular architecture where each musical agent encapsulates behavior models, event schedulers, and synthesis controllers. Users define agents via configuration files or code, specifying generative algorithms, response triggers, and communication protocols for ensemble coordination. The system supports real-time performance through efficient scheduling, enabling dynamic adaptation to external inputs or other agents' outputs. It includes core modules for pattern generation, machine learning–based style modeling, and MIDI/Open Sound Control (OSC) integration. With extensible plugin support, developers can add custom synthesis engines, analysis tools, or AI models. Ideal for academic research, interactive installations, and live algorithmic performances, the framework bridges computational creativity and practical music-making workflows.
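As an illustration of the agent model described above, here is a minimal sketch of what a code-defined agent might look like. The names (`MusicalAgent`, `generate_event`, `run`) and the note-choice logic are hypothetical, not the framework's actual API; the point is the pairing of a behavior model with a beat-level event scheduler and a pluggable output callback.

```python
# Hypothetical sketch of a musical agent: a behavior model, a simple
# event scheduler, and an output hook. Names are illustrative only.
import random
import time
from dataclasses import dataclass, field


@dataclass
class MusicalAgent:
    """Bundles a behavior model, a scheduler, and an output callback."""
    name: str
    tempo_bpm: float = 120.0
    scale: list[int] = field(default_factory=lambda: [60, 62, 64, 67, 69])  # C pentatonic (MIDI notes)
    send: callable = print  # stand-in for a MIDI/OSC output callback

    def generate_event(self, incoming: int | None = None) -> int:
        # Behavior model: respond to a neighbor's note if one arrived,
        # otherwise improvise from the agent's own scale.
        if incoming is not None:
            return incoming + random.choice([-2, 0, 3])
        return random.choice(self.scale)

    def run(self, steps: int = 8, listen=lambda: None) -> None:
        # Event scheduler: fire one event per beat, polling an external
        # input source (another agent, a sensor, a human performer).
        beat = 60.0 / self.tempo_bpm
        for _ in range(steps):
            note = self.generate_event(listen())
            self.send((self.name, note))
            time.sleep(beat)


if __name__ == "__main__":
    MusicalAgent(name="lead").run(steps=4)
```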
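The MIDI/OSC integration could then be exercised by routing the agent's output callback through an OSC client. The snippet below is an assumption-laden sketch: it uses the third-party python-osc package rather than whatever transport the framework ships, reuses the hypothetical `MusicalAgent` from the sketch above, and targets sclang's default port so a SuperCollider or Max/MSP patch could pick up the events.

```python
# Hypothetical OSC wiring using python-osc (an assumption; the framework
# may provide its own MIDI/OSC layer). Each generated note is sent to
# /agent/note on SuperCollider's default language port.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)


def osc_send(event):
    name, note = event
    client.send_message("/agent/note", [name, note])


MusicalAgent(name="lead", send=osc_send).run(steps=4)
```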