Differentiable artificial reverberation has the potential to address a wide range of audio machine-learning tasks, including style transfer, blind estimation, and speech enhancement. This research area has grown rapidly, with many new approaches proposed over the past few years, particularly within the field of differentiable digital signal processing. As a result, numerous differentiable reverb architectures have emerged. At the same time, these developments highlight the need for loss functions that properly capture the perceptually important time- and frequency-domain characteristics of reverberation.
In this talk, we will review key results from recent literature with a focus on architectures suitable for real-time applications. Specifically, we will discuss different architectural choices, optimization strategies, and practical insights for designing loss functions tailored to reverberation. We will also explore how standard, off-the-shelf loss functions can be adapted to better handle reverb and reverberant signals. We will conclude with a forward-looking perspective, highlighting current challenges and open research questions, as well as emerging spatial audio applications.
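As a concrete illustration of the kind of off-the-shelf loss mentioned above, the sketch below implements a multi-resolution STFT loss in NumPy. This is a common spectral loss in differentiable audio work, not a method from the talk itself; the function names (`stft_mag`, `multires_stft_loss`) and the particular FFT/hop sizes are illustrative assumptions. One simple way to adapt such a loss to reverberant material is to include longer analysis windows so that late-reverberation energy is captured.

```python
import numpy as np

def stft_mag(x, n_fft, hop):
    # Hann-windowed magnitude STFT via simple framing (no padding).
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * win
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=-1))

def multires_stft_loss(pred, target,
                       resolutions=((512, 128), (1024, 256), (2048, 512))):
    """Sum of spectral-convergence and log-magnitude L1 terms over several
    STFT resolutions. The resolution set is illustrative; longer windows
    can be added to better capture reverberant decay."""
    loss = 0.0
    for n_fft, hop in resolutions:
        P = stft_mag(pred, n_fft, hop)
        T = stft_mag(target, n_fft, hop)
        # Spectral convergence: relative Frobenius-norm error.
        sc = np.linalg.norm(T - P) / (np.linalg.norm(T) + 1e-8)
        # L1 distance between log magnitudes.
        logmag = np.mean(np.abs(np.log(T + 1e-8) - np.log(P + 1e-8)))
        loss += sc + logmag
    return loss
```

The loss is zero when the two signals match and grows with spectral mismatch; in a differentiable framework the same structure would be written with autodiff-friendly operations so gradients can flow back to reverb parameters.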