Music technology has enabled artists and engineers to create highly interactive musical systems in many different forms of media, from games to art installations and instruments. In this talk I share ideas on ways to approach technology and tools such as data, sensors and hardware, MIDI or OSC, and programming languages, in order to create systems that audiences can engage with intuitively across media that involve sound.
The convergence of AI and cloud computing is revolutionizing audio development. This session explores how AWS cloud services enable audio developers to build scalable, AI-powered applications for speech recognition, audio synthesis, real-time streaming, and generative audio content.
We'll demonstrate practical architectures for:
- Real-time audio processing with AI-based dubbing and translation using AWS Media Services
- Speech synthesis and recognition using Amazon Polly, Transcribe, and generative AI models (see the sketch after this list)
- Scalable audio streaming architectures with Amazon Kinesis and serverless computing
- Building audio ML models with Amazon SageMaker and deploying them at scale
- Sentiment analysis from audio data using AWS generative AI services

Attendees will learn cloud-native patterns for audio development, including containerization with Kubernetes, event-driven architectures, and GPU-optimized infrastructure for AI workloads.
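As a taste of the speech-synthesis building block above, here is a minimal sketch using the boto3 Polly client. It assumes AWS credentials and a default region are already configured; the voice and output file name are placeholders, not part of the presented architectures.

```python
# Minimal sketch: synthesize a short narration clip with Amazon Polly via boto3.
# Assumes AWS credentials and region are configured in the environment.
import boto3

polly = boto3.client("polly")

response = polly.synthesize_speech(
    Text="Welcome to the real-time audio session.",
    OutputFormat="mp3",      # also available: "ogg_vorbis", "pcm"
    VoiceId="Joanna",        # any available Polly voice
    Engine="neural",         # neural voices where the region supports them
)

# AudioStream is a streaming body; write it out for playback or further processing.
with open("narration.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```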
Vishal Alhat is a Developer Advocate at Amazon Web Services (AWS) and a former AWS Hero, recognized for his significant contributions to the AWS community. With 11+ years of experience in cloud technologies, Vishal specializes in DevOps, cloud security, and AI/ML.
Wednesday June 3, 2026 9:00am - 10:00am JST
In recent years, generative models have become capable of generating high-quality music from natural language. However, mechanisms that adequately support the repeated trial-and-error and fine-grained nuance adjustments that occur throughout the production process are still at an early stage.
This presentation introduces design approaches based on interactive machine learning, where users can leverage small amounts of local data generated during the production process and manipulate the latent space of generative models. By incorporating exploration and parameter manipulation into an interactive loop, we present a structure that allows generative model outputs to be not merely "selected," but rather integrated into and utilized within one's own production process.
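To make the interactive loop concrete, here is a hypothetical sketch (not the presenter's system) of one latent-space manipulation: nudging the current latent point toward a few locally produced reference clips. The `encode` and `decode` callables are stand-ins for whatever generative model API is in use.

```python
# Illustrative only: steer a generative model by moving its latent vector toward
# the centroid of a small set of user-curated reference clips.
import numpy as np

def steer(z_current, reference_clips, encode, decode, amount=0.3):
    """One step of an interactive exploration loop over the latent space."""
    z_refs = np.stack([encode(clip) for clip in reference_clips])
    direction = z_refs.mean(axis=0) - z_current
    z_new = z_current + amount * direction   # small, reversible step
    return decode(z_new), z_new              # audio preview + updated latent state
```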
Through research case studies from the presenter, we will introduce visualization of generative models, real-time control, applications to live performance, and design examples as audio plugins and tools. We will discuss new practical approaches for how music generation AI can be integrated into workflows for composition, arrangement, and sound design.
In an era of generative automation, the traditional boundary between artist and audience is dissolving. This session explores the transition of the human voice from a static recording to a dynamic, professional instrument. Drawing on my experience as a Billboard-charting frontman and MBA strategist, I will demonstrate how vocal synthesis—specifically the development of the HXVOC voicebank—enables creators to bypass the 'cold wall' of the algorithm. We will discuss the ethical shift from mass-consumption to distributed authorship, showing that technology will not replace the performer, but empower a global community to build its own legacy.
Despite Rust's benefits, it has seen limited adoption in audio development, due to factors including existing ecosystems built for other languages (JUCE, the VST3 SDK), an audio ecosystem that has had less time to grow, and the general appeal of familiarity.
This talk showcases CSick, a scaffolding system for automated FFI generation designed to provide a bridge, not a ferry, between C++ and Rust.
Previous solutions have focused on addressing developers' reluctance by easing the transition with familiar workflows adapted for Rust development (e.g. cxx-juce), with Rust as the forerunner and C++ only when necessary.
By contrast, CSick is built to allow gradual adoption—integrating Rust code into existing C++ codebases one chunk at a time—allowing developers to reap Rust’s benefits without entirely jumping ship.
How close is close enough when modeling analog ladder filters on a $1-class microcontroller?
Microcontrollers operate under strict computational constraints that fundamentally differ from desktop virtual analog environments. When implementing Moog-style ladder filters in such systems, defining and evaluating analog-likeness becomes a practical engineering challenge.
This work is motivated by the development of a virtual analog synthesizer running on an RP2350 microcontroller, designed to deliver a convincing analog feel. Practical evaluation metrics are consolidated, including resonance peak alignment, Q consistency, normalized harmonic spectra, level-dependent cutoff shift, and self-oscillation behavior. The talk also discusses how these metrics can be meaningfully and reproducibly measured.
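As an illustration of how one of these metrics could be measured reproducibly (a generic sketch, not the talk's exact procedure): estimate the resonance-peak frequency of a candidate implementation by driving it with white noise and locating the peak of the output spectrum, then compare against the SPICE reference. `filter_block` is a hypothetical stand-in for a per-sample processing function.

```python
# Sketch of one metric: resonance-peak alignment from a noise-driven spectrum.
import numpy as np

def resonance_peak_hz(filter_block, fs=48_000, n=1 << 16, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n).astype(np.float32)
    out = np.array([filter_block(x) for x in noise])   # per-sample processing
    spectrum = np.abs(np.fft.rfft(out * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return freqs[np.argmax(spectrum)]

# Compare against the analog reference: peak_error = measured - spice_reference_peak.
```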
Ladder filter implementations drawn from published algorithms and from open-source audio libraries such as the Teensy Audio Library, DaisySP, and JUCE are ported to the RP2350 and evaluated against a SPICE circuit simulation that serves as a reproducible analog reference. Measurement results are presented to examine how closely each implementation can approach analog behavior under strict hardware constraints.
Differentiable artificial reverberation has the potential to address a wide range of audio machine-learning tasks, including style transfer, blind estimation, and speech enhancement. This research area has grown rapidly, with many new approaches proposed over the past few years, particularly within the field of differentiable digital signal processing. As a result, numerous differentiable reverb architectures have emerged. At the same time, these developments highlight the need for loss functions that properly capture the perceptually important time- and frequency-domain characteristics of reverberation.
In this talk, we will review key results from recent literature with a focus on architectures suitable for real-time applications. Specifically, we will discuss different architecture choices, optimization strategies, and practical insights for designing loss functions tailored to reverberation. We will also explore how standard, off-the-shelf loss functions can be adapted to better handle reverb and reverberated signals. We will conclude with a forward-looking perspective, highlighting current challenges and open research questions, as well as spatial audio applications.
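For readers unfamiliar with the off-the-shelf losses mentioned above, a common starting point is a multi-resolution STFT loss. The sketch below (PyTorch, illustrative only) compares log-magnitude spectrograms at several window sizes, so that both early reflections and the late reverberant tail contribute to the gradient.

```python
# Illustrative multi-resolution STFT loss over log-magnitude spectrograms.
import torch

def multires_stft_loss(pred, target, ffts=(512, 1024, 2048), eps=1e-7):
    loss = 0.0
    for n_fft in ffts:
        win = torch.hann_window(n_fft, device=pred.device)
        P = torch.stft(pred, n_fft, hop_length=n_fft // 4, window=win,
                       return_complex=True).abs()
        T = torch.stft(target, n_fft, hop_length=n_fft // 4, window=win,
                       return_complex=True).abs()
        loss = loss + torch.mean(torch.abs(torch.log(P + eps) - torch.log(T + eps)))
    return loss / len(ffts)
```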
Modern audio plug-in development still pays a steep portability tax: separate UI/DSP stacks per OS and host, repeated rebuilds, and validation that’s hard to automate. This talk introduces a “write once, run everywhere” approach that treats Chromium as a compatibility runtime and tunnels its audio graph directly into a DAW plug-in host (VST)—allowing a single codebase for both UI and DSP to run across environments.
Beyond portability, the runtime enables a DevOps-style workflow for audio: externally controlled timing, deterministic offline rendering, and CI-friendly regression testing for AudioWorklet-style processing. We’ll present a working proof-of-concept, outline the key architectural choices and trade-offs, and show how this foundation can unlock faster iteration at ecosystem scale—especially as automated and AI-assisted development becomes the norm.
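The CI-friendly regression testing mentioned above can be as simple as comparing a deterministic offline render against a stored golden file. The sketch below is a generic illustration of that idea; the file name and tolerance are placeholders, not part of the presented system.

```python
# Generic golden-file regression check for deterministic offline renders.
import numpy as np

def assert_matches_golden(rendered, golden_path, tol_db=-60.0):
    golden = np.load(golden_path)
    assert rendered.shape == golden.shape, "render length changed"
    err = rendered - golden
    err_db = 10 * np.log10(np.mean(err ** 2) + 1e-20)   # residual power in dB
    assert err_db < tol_db, f"regression: residual {err_db:.1f} dB above threshold"
```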
Our company develops a singing voice synthesis application for PC, and based on that architecture, we undertook implementation as an AUv3 plugin for iOS. However, requirements specific to singing voice synthesis—such as UI design premised on lyrics input, handling of large-scale models, and high initialization costs—are closely related to AUv3's execution model, sandbox constraints, extension launch restrictions, and iOS memory management characteristics.
In this presentation, we will lay out the technical constraints we encountered while realizing a singing voice synthesis application as an AUv3 plugin, and share the considerations that should be treated as prerequisites when designing AUv3 plugins for a mobile environment.
When we build cross-platform music apps and plugins, they mostly target desktop, sometimes iOS, and far less often Android. Since Android audio latency has improved a lot by 2026, we have to tackle the next problem: we are missing audio plugin formats on Android. Apple has a good plugin ecosystem, so why not design one for Android?
But you may wonder: why can't we simply adopt VST3, CLAP, or LV2 on Android? Because it is not that simple. We have a lot of lessons learned (or still being learned) from Apple's AudioUnit V3, along with their efforts on Logic Pro.
Throughout this session we will explain, drawing on past accomplishments, what makes audio plugin functionality tricky to achieve on Android and how to deal with it. There are many issues, such as publishing audio plugin products from diverse plugin vendors without being tied to a specific DAW, passing audio and event data between a DAW and a plugin, showing a plugin GUI inside a DAW, and so on. We also discuss what is missing on the Android platform itself to achieve full realtime capability within our apps, not just within the platform's own frameworks.
There are many trends in audio plugin development, such as MIDI 2.0 integration (as in the upcoming next-generation JUCE AudioProcessor), CLAP-first development, and AI capabilities such as MCP integration. We discuss what kinds of features a plugin format should, and should NOT, tackle, taking CLAP in particular as a reference. You will also learn why JUCE cannot be a "format" here.
Finally, designing a plugin format is only a milestone, not the goal. We also have to build a plugin "ecosystem", which is very often understood as a chicken-and-egg problem. We will discuss this alongside some existing efforts.
NKIDO is a live-coding audio environment built from scratch: a Tidal-inspired pattern language, a zero-allocation C++20 bytecode VM with 95+ DSP opcodes, and a browser IDE running it all via WebAssembly. This talk covers the language design, the runtime internals, and what it's like to vibe-code 60,000 lines of real-time audio C++ with AI.
In composition and arrangement using existing DAWs, users set appropriate timbres for each track from vast timbral datasets classified by category (such as instruments and sound source names). We reconsider this current text-based timbral search interaction itself and propose a new approach to expand creativity across diverse timbres. We have removed the conventional concept of timbral categories and have: 1) calculated relationships between timbres depending only on acoustic features, and 2) constructed an interface that enables visual confirmation of relationships between timbres. By visualizing similarity between timbres across categories, we provide serendipitous timbral exploration not constrained by conventional timbral categories. In this presentation, we will discuss the background of the proposed approach, technical overview, and usefulness based on user testing.
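As a rough illustration of the category-free approach (not the authors' implementation), one could describe each sound purely by acoustic features and project the feature space to two dimensions, so that similar timbres cluster together regardless of their original category labels. The sketch below assumes librosa and scikit-learn and uses MFCC statistics as the feature vector.

```python
# Illustrative only: build a 2D "timbre map" from acoustic features alone.
import librosa
import numpy as np
from sklearn.manifold import TSNE

def timbre_map(audio_paths, sr=22_050):
    feats = []
    for path in audio_paths:
        y, _ = librosa.load(path, sr=sr, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
        feats.append(np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)]))
    coords = TSNE(n_components=2, perplexity=5).fit_transform(np.stack(feats))
    return coords   # scatter-plot these points to browse timbres visually
```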
Building cross-platform audio apps is difficult - and for a long time, Android lagged far behind iOS when it came to music-making tools. That's changing. Elementary Audio introduces a new paradigm for audio experiences: by exposing a shared JS API with both web and native renderers, it makes code reuse across platforms feel natural. In this talk, I'll introduce Elementary Audio, walk through react-native-elementary, and demo what's possible to build with it today - including how AI is removing what little friction remains.
Software engineer and music producer based in London. Building Midicircuit at Yonko Level — an interactive app for learning music production — and releasing beats as TXBROWN. Interested in audio engineering, learning UX, and making music technology accessible to everyone.
The Head-Related Transfer Function (HRTF) is a key technology for three-dimensional binaural audio rendering. However, issues regarding audio quality and HRTF personalization must be resolved for this technology to be adopted more widely. When HRTFs are applied to music production, audio quality may become problematic. Additionally, since HRTFs exhibit significant individual variation, personalized HRTFs—that is, HRTFs measured or customized for each user—are desirable, but cost becomes an issue. Therefore, for widespread adoption of HRTFs, a typical HRTF that provides consistent effectiveness for everyone is needed.
The speaker proposes using Generalized HRTF (GHRTF) based on machine learning as a solution to these problems. This presentation first outlines the fundamentals and challenges of HRTFs and binaural rendering. Then it presents the definition of GHRTFs that achieve high audio quality, along with estimation methods based on machine learning and their results. Next, the presentation demonstrates a learning method for Typical GHRTFs based on data from numerous subjects and provides estimation examples. Finally, the presentation describes its application to SoundObject, an object-based three-dimensional spatial audio VST 3 plug-in that the speaker has made freely available to the public. The presentation concludes that this approach yields clearer directionality and higher audio quality compared to conventional dummy head HRTFs.
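For context, binaural rendering with any HRTF, whether a dummy-head measurement or an estimated GHRTF, ultimately reduces to convolving a source with a pair of head-related impulse responses. The minimal sketch below illustrates that step; the HRIR arrays are assumed inputs, not derived here.

```python
# Minimal binaural rendering step: convolve a mono source with left/right HRIRs.
import numpy as np
from scipy.signal import fftconvolve

def binauralize(mono, hrir_l, hrir_r):
    left = fftconvolve(mono, hrir_l, mode="full")
    right = fftconvolve(mono, hrir_r, mode="full")
    return np.stack([left, right], axis=-1)   # (samples, 2) stereo buffer
```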
The presentation materials are in both English and Japanese.
Areas of expertise: analog and digital signal processing, circuit design, computer architecture, low-level programming, and the UNIX kernel.
mimium (https://mimium.org) is a functional programming language designed for audio processing, with syntax similar to Rust. It runs on both native and web platforms, and allows oscillators and signal processing to be defined from a very low level. It also features its own live-coding capability based on differential analysis of the source code, enabling hot-swapping of signal-processing code without resetting the internal audio state. This presentation will explain the details of its design and implementation.
Real-time convolution reverb is well understood, but continuously synthesizing long, spatial impulse responses (IRs) at runtime remains a significant engineering and perceptual challenge. This session presents a hybrid GPU/CPU acoustics pipeline that synthesizes listener-centric IRs in real time using multi-bounce raytracing. The pipeline is currently integrated into Elemental Games’ proprietary engine for its unannounced open-world debut title.
The system models frequency-dependent absorption, geometric propagation, and spatial encoding using Ambisonics, while balancing physical plausibility with perceptual clarity. Beyond straightforward multi-bounce tracing, the implementation explores performance-aware sampling strategies and hybrid visibility heuristics to better capture the contrast between enclosed and open spaces. Adaptive update strategies dynamically adjust IR refresh rates based on listener motion and scene changes, maintaining perceptual stability while respecting GPU budgets.
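As a small illustration of the spatial-encoding step (a generic first-order example, not the engine's actual encoder), each traced arrival direction can be encoded with Ambisonic channel gains like these:

```python
# First-order Ambisonic (ACN channel order, SN3D normalization) gains for one
# arrival direction; each traced bounce would add its contribution with these weights.
import numpy as np

def foa_gains(azimuth, elevation):
    # Channel order ACN: W, Y, Z, X
    return np.array([
        1.0,
        np.sin(azimuth) * np.cos(elevation),
        np.sin(elevation),
        np.cos(azimuth) * np.cos(elevation),
    ])
```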
IR data is prepared using partitioned FFT processing on the GPU and transferred to the audio thread through a wait-free synchronization model, enabling stable time-varying convolution without blocking real-time audio processing. Particular focus is given to artifact-free IR updates under evolving conditions, including hybrid time- and frequency-domain crossfading techniques.
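The partitioned FFT scheme mentioned above is, conceptually, uniformly partitioned convolution. The sketch below shows a plain NumPy, CPU-only version for clarity; the talk's pipeline performs this work on the GPU and hands results to the audio thread through a wait-free model, which this sketch does not attempt to reproduce.

```python
# Illustrative uniformly partitioned convolution (overlap-save, block size B).
import numpy as np

class PartitionedConvolver:
    def __init__(self, ir, block=256):
        self.B = block
        pad = (-len(ir)) % block
        parts = np.pad(ir, (0, pad)).reshape(-1, block)    # P partitions of B samples
        self.H = np.fft.rfft(parts, n=2 * block, axis=1)   # spectrum of each partition
        self.fdl = np.zeros_like(self.H)                   # frequency-domain delay line
        self.prev = np.zeros(block)                        # previous input block

    def process(self, x):                                  # x: one block of B samples
        X = np.fft.rfft(np.concatenate([self.prev, x]))    # overlap-save input spectrum
        self.prev = x.copy()
        self.fdl = np.roll(self.fdl, 1, axis=0)
        self.fdl[0] = X
        y = np.fft.irfft((self.fdl * self.H).sum(axis=0))  # multiply-accumulate + IFFT
        return y[self.B:]                                  # valid half of overlap-save
```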
The talk examines architectural decisions, modeling trade-offs, perceptual post-processing techniques such as diffusion and stochastic smoothing, and the practical constraints of integrating real-time acoustic synthesis into a production engine. Attendees will gain insight into designing hybrid GPU/CPU DSP pipelines that balance physical modeling, runtime performance, and creative control.
Anton Lundberg is a software engineer and audio programmer specializing in high-performance real-time audio systems and game engine architecture. He develops next-generation game audio middleware at elias.audio and leads development of the audio technology stack at Elemental Games.
Hatsune Miku has evolved beyond a mere sound source into a "singing voice synthesizer" equipped with advanced expressiveness and real-time responsiveness. This session explains the core technologies of real-time singing voice synthesis developed to meet these requirements, focusing on the architectural shift from conventional subtractive synthesis-based singing synthesis methods to additive synthesis-based approaches.
We delve into fundamental technical challenges in singing voice synthesis: "balancing computational cost with the fidelity of spectral reconstruction" and "ensuring precise controllability without compromising naturalness." In particular, we detail why the additive synthesis architecture was adopted, and discuss the advantages and trade-offs in time-series fidelity and spectral manipulation flexibility compared to other methods such as subtractive synthesis.
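For orientation, the additive (sinusoidal-model) idea discussed above amounts to resynthesizing each analysis frame as a sum of harmonic partials. The toy sketch below illustrates the principle only; the per-frame f0, amplitudes, and phases would come from the synthesis model, not be hand-set.

```python
# Toy additive synthesis of one frame as a sum of harmonic partials.
import numpy as np

def additive_frame(f0, harmonic_amps, phases, fs=48_000, frame_len=256):
    t = np.arange(frame_len) / fs
    out = np.zeros(frame_len)
    for k, (amp, phase) in enumerate(zip(harmonic_amps, phases), start=1):
        if k * f0 < fs / 2:                               # keep partials below Nyquist
            out += amp * np.sin(2 * np.pi * k * f0 * t + phase)
    return out
```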
Additionally, as optimization strategies for maintaining real-time performance in general consumer environments, we address parameter compression concepts and computational load management techniques. Finally, we share future perspectives including SDK-oriented design to support next-generation creativity and engine extensibility.