What happens when you give an LLM the ability not just to control a synth's parameters, but to understand its signal flow and hear the results? In this talk, I'll explore that question live on stage: starting with a basic synth sound, describing what I want in plain language, and iteratively shaping it into a fully designed lead patch. Along the way, we'll see where an LLM can genuinely reason about sound design versus where it falls short, and what kind of information (from signal-flow metadata to real-time spectral analysis) makes the difference between an AI that blindly turns knobs and one that can diagnose and fix a problem.