Music technology has enabled artists and engineers to create highly interactive musical systems in many different forms of media, from games to art installations and instruments. In this talk I share ideas on how to approach technology and tools such as data, sensors and hardware, MIDI or OSC, and programming languages in order to create systems that audiences can engage with intuitively across media involving sound.
Despite its benefits, Rust has seen limited adoption, owing to factors including existing ecosystems built for other languages (JUCE, the VST3 SDK), less time for its own ecosystem to mature, and the general appeal of familiarity.
This talk showcases CSick, a scaffolding system for automated FFI generation designed to provide a bridge, not a ferry, between C++ and Rust.
Previous solutions have addressed developers' reluctance by easing the transition with familiar workflows adapted for Rust development (e.g. cxx-juce), treating Rust as the primary language and falling back to C++ only when necessary.
By contrast, CSick is built to allow gradual adoption—integrating Rust code into existing C++ codebases one chunk at a time—allowing developers to reap Rust’s benefits without entirely jumping ship.
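As a rough, hypothetical illustration of the kind of bridge such gradual adoption relies on, the sketch below uses the general-purpose cxx crate rather than CSick itself; the function name and signature are invented for the example, and CSick's generated bindings may look quite different. The underlying idea is the same: expose a small piece of Rust to an existing C++ codebase.

```rust
// Minimal sketch with the cxx crate (not CSick): expose one Rust function
// to an existing C++ codebase, so a single DSP routine can be ported first.
// `process_block` is a hypothetical name chosen for this example.

#[cxx::bridge]
mod ffi {
    extern "Rust" {
        // Callable from C++ via the header that cxx generates.
        fn process_block(samples: &mut [f32], gain: f32);
    }
}

// Safe Rust implementation; the rest of the plugin can stay in C++ for now.
fn process_block(samples: &mut [f32], gain: f32) {
    for s in samples.iter_mut() {
        *s *= gain;
    }
}
```

On the C++ side, the header generated by cxx exposes this function taking a rust::Slice&lt;float&gt;, so an existing audio callback can call into the Rust routine without copying the buffer.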
Our company develops a singing voice synthesis application for PC, and we implemented an AUv3 plugin for iOS based on that architecture. However, requirements specific to singing voice synthesis, such as a UI built around lyrics input, the handling of large models, and high initialization costs, interact closely with AUv3's execution model, sandbox constraints, extension launch restrictions, and the characteristics of iOS memory management.
In this presentation, we will lay out the technical constraints we encountered while realizing a singing voice synthesis application as an AUv3 plugin, and share considerations that should be treated as prerequisites when designing an AUv3 plugin for a mobile environment.
In composition and arrangement with existing DAWs, users pick an appropriate timbre for each track from vast timbre collections organized by category (such as instrument type or sound source name). We reconsider this text-based timbre search interaction itself and propose a new approach to expanding creativity across diverse timbres. We removed the conventional notion of timbre categories and 1) computed relationships between timbres based solely on acoustic features, and 2) built an interface that lets users visually inspect those relationships. By visualizing similarity between timbres across categories, we enable serendipitous timbre exploration unconstrained by conventional categories. In this presentation, we will discuss the background of the proposed approach, a technical overview, and its usefulness based on user testing.
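As a purely illustrative sketch (not the authors' implementation), category-agnostic similarity can be computed directly on precomputed acoustic feature vectors, for example with cosine similarity; the feature set and function names here are assumptions made for the example.

```rust
// Hypothetical sketch: ranking timbres by acoustic similarity alone,
// ignoring category labels. Feature vectors (e.g. spectral descriptors or
// MFCC means) are assumed to be precomputed, one per preset.

fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Return the indices of the `k` presets most similar to `query`,
/// regardless of which instrument category they belong to.
fn nearest_timbres(query: &[f32], presets: &[Vec<f32>], k: usize) -> Vec<usize> {
    let mut ranked: Vec<(usize, f32)> = presets
        .iter()
        .enumerate()
        .map(|(i, p)| (i, cosine_similarity(query, p)))
        .collect();
    ranked.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap_or(std::cmp::Ordering::Equal));
    ranked.into_iter().take(k).map(|(i, _)| i).collect()
}
```

The ranking and visual layout in the actual system may differ; the point is only that no category label enters the computation.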
mimium (https://mimium.org) is a functional programming language designed for audio processing, with syntax similar to Rust. It runs on both native and web platforms and allows oscillators and signal processing to be defined from a very low level. It also features its own live-coding capability based on differential analysis of the source code, enabling hot-swapping of signal-processing code without resetting the audio engine's internal state. This presentation will explain the details of its design and implementation.
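The state-preserving hot swap can be pictured with a small conceptual Rust sketch; this is neither mimium code nor its actual mechanism, and all names are invented. The processing function is replaced while the state it reads and writes is kept, so the signal continues from its current values rather than restarting from silence.

```rust
// Conceptual illustration only (not mimium's implementation): swap the DSP
// code while keeping its internal state, so processing continues seamlessly.

struct FilterState {
    z1: f32, // one-sample memory, preserved across the swap
}

type DspFn = fn(&mut FilterState, f32) -> f32;

fn lowpass_v1(s: &mut FilterState, x: f32) -> f32 {
    s.z1 = 0.99 * s.z1 + 0.01 * x;
    s.z1
}

fn lowpass_v2(s: &mut FilterState, x: f32) -> f32 {
    // "Edited" coefficients; the state layout is unchanged, so it is reused.
    s.z1 = 0.95 * s.z1 + 0.05 * x;
    s.z1
}

fn main() {
    let mut state = FilterState { z1: 0.0 };
    let mut dsp: DspFn = lowpass_v1;
    for _ in 0..4 {
        println!("v1 out: {}", dsp(&mut state, 1.0));
    }
    // Hot swap: replace the code, keep the state.
    dsp = lowpass_v2;
    for _ in 0..4 {
        println!("v2 out: {}", dsp(&mut state, 1.0));
    }
}
```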