Music Production, Processing and Analysis Group

Research that contributes to new techniques and technologies for interacting with sound, as well as research that responds to such developments in the studio and seeks to understand them from a human and cultural perspective.

Reverberation modelling

The first working prototype of a new reverberation modelling system is currently being tested in the Music Production, Processing and Analysis Group.

Modern digital reverberation systems are typically either convolution-based or algorithmic. The former works with samples of reverberation (usually in the form of a room impulse response), which are applied to incoming audio via a convolution engine. The latter passes the audio through a network of audio processing components (usually delays and attenuators) that attempt to mimic the interactions with room surfaces and the distances between them. Convolving audio with actual room samples can give a very convincing impression that those sounds were recorded in that space; however, it is difficult to interact with the room samples to adjust them to suit a particular requirement. Algorithmic reverbs, on the other hand, are highly configurable but do not always provide the same level of realism.
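The two approaches can be sketched in a few lines of code. The sketch below is illustrative only (the function names and parameter choices are my own, not taken from any particular product): it applies a room impulse response by FFT convolution, and, for contrast, implements a single feedback comb filter of the kind that algorithmic reverbs combine into larger networks of delays and attenuators.

```python
import numpy as np

def convolution_reverb(dry, ir):
    """Apply a room impulse response to a dry signal via FFT convolution."""
    n = len(dry) + len(ir) - 1
    return np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(ir, n), n)

def comb_reverb(dry, delay, gain, n_out):
    """One feedback comb filter: y[n] = x[n] + gain * y[n - delay].
    A building block of algorithmic reverbs, not a full design."""
    y = np.zeros(n_out)
    y[:len(dry)] = dry
    for i in range(delay, n_out):
        y[i] += gain * y[i - delay]
    return y
```

A practical algorithmic reverb would combine many such combs (with mutually prime delays) and allpass filters, as in Schroeder's classic design; the single comb here only shows the principle of recirculating delayed, attenuated copies of the input.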
The system developed at York models reverberation samples, extracting the parameters that describe how the sound develops as it travels through the space. Having these parameters available enables interaction with the sound produced in a way that is usually only possible with algorithmic reverbs, while retaining the level of realism achieved with convolution reverbs.
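The details of the York model are not given here. As an indication of what extracting decay parameters from a reverberation sample can involve, a standard starting point (a hypothetical sketch, not the York method) is Schroeder's backward integration of the impulse response, from which a reverberation time can be fitted:

```python
import numpy as np

def decay_curve_db(ir):
    """Schroeder backward integration: energy remaining after each
    sample, normalised and expressed in dB."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]
    return 10 * np.log10(energy / energy[0])

def estimate_rt60(ir, sr, lo=-5.0, hi=-25.0):
    """Fit a line to the decay curve between `lo` and `hi` dB and
    extrapolate to -60 dB to estimate the reverberation time."""
    edc = decay_curve_db(ir)
    idx = np.where((edc <= lo) & (edc >= hi))[0]
    t = idx / sr
    slope, intercept = np.polyfit(t, edc[idx], 1)  # dB per second
    return -60.0 / slope
```

A single T60 figure is of course a much coarser description than a full parametric model of how the sound develops through the space, but it illustrates the idea of recovering decay parameters from a measured response.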

Of course, ‘realism’ may not be the ultimate aim in certain sound design situations, and one of the advantages of algorithmic reverbs is in being able to push parameters beyond physically likely, or even possible, settings. The new system allows such extremes to be applied to reverb samples.

The following examples illustrate some of the capabilities of the system.

The system can take an impulse response such as this (derived from measurements taken at the National Centre for Early Music),…

…can model it and resynthesize it from that model,…

…and enables interactions with the model, such as extending the reverberation time (in this case from 1.4 to 10.0 seconds)…
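How the York system performs this extension is not described here. As a crude illustration of the idea, assuming the tail follows a single exponential decay, an impulse response can be reshaped by multiplying it by the ratio of the target and original decay envelopes (function name and parameters are mine, for illustration only):

```python
import numpy as np

def stretch_decay(ir, sr, t60_old, t60_new):
    """Reshape an (assumed) single exponential decay from t60_old to
    t60_new by multiplying with the ratio of the two envelopes.
    An envelope falling 60 dB in T60 seconds is 10 ** (-3 t / T60)."""
    t = np.arange(len(ir)) / sr
    gain = 10 ** (3 * t * (1 / t60_old - 1 / t60_new))
    return ir * gain
```

This simple reshaping can only amplify whatever tail (and noise floor) the recording already contains, which is one reason a model-based resynthesis is needed for large, clean extensions of the kind described above.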

Components such as harmonically related room modes can be separated out from the room’s response…
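As a simple illustration of isolating one modal component (not the separation method used by the system described here), the impulse response's spectrum can be masked to a narrow band around the mode's frequency; harmonically related modes could then be collected by repeating this at multiples of a fundamental:

```python
import numpy as np

def extract_mode(ir, sr, freq, bandwidth):
    """Crudely isolate one room mode by masking the IR's spectrum to a
    narrow band around `freq` (a brick-wall bandpass via the FFT)."""
    spec = np.fft.rfft(ir)
    bins = np.fft.rfftfreq(len(ir), 1 / sr)
    mask = np.abs(bins - freq) <= bandwidth / 2
    return np.fft.irfft(spec * mask, len(ir))
```

A brick-wall mask like this introduces ringing of its own; a practical system would model each mode (frequency, damping, amplitude) rather than simply filtering, but the sketch shows what "separating a component from the response" means in signal terms.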

…and extreme settings can be chosen without a deterioration in sound quality (here the reverb time has been increased to 79.1 seconds).

For comparison, this is what a typical phase-vocoder-based time-stretching algorithm (Elastique Pro in Cockos Reaper) achieves when attempting to extend the reverberation time to 10.0 seconds.
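Elastique's internals are proprietary, but the general mechanism of phase-vocoder time stretching can be sketched as follows (a minimal textbook implementation with my own parameter choices, not the algorithm Reaper uses): magnitudes are interpolated between analysis frames while each bin's phase is accumulated at its estimated instantaneous frequency. On noise-like reverberation tails this per-bin sinusoidal model tends to introduce smearing and "phasiness", which is why such stretching struggles where a model-based extension does not.

```python
import numpy as np

def stft(x, win, hop):
    """Short-time Fourier transform as a stack of windowed rFFT frames."""
    n = len(win)
    return np.array([np.fft.rfft(win * x[i:i + n])
                     for i in range(0, len(x) - n, hop)])

def phase_vocoder(x, rate, n_fft=1024, hop=256):
    """Stretch x by 1/rate (rate < 1 lengthens) via magnitude
    interpolation and per-bin phase accumulation."""
    win = np.hanning(n_fft)
    S = stft(x, win, hop)
    steps = np.arange(0, S.shape[0] - 1, rate)   # fractional frame positions
    expected = 2 * np.pi * hop * np.arange(n_fft // 2 + 1) / n_fft
    phase = np.angle(S[0])
    out = np.zeros(len(steps) * hop + n_fft)
    for t, step in enumerate(steps):
        i = int(step)
        frac = step - i
        mag = (1 - frac) * np.abs(S[i]) + frac * np.abs(S[i + 1])
        frame = np.fft.irfft(mag * np.exp(1j * phase))
        out[t * hop:t * hop + n_fft] += win * frame   # overlap-add
        # advance phase by the measured deviation from the bin frequency
        dphi = np.angle(S[i + 1]) - np.angle(S[i]) - expected
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))
        phase += expected + dphi
    return out
```

Even this correct textbook form loses vertical phase coherence between bins, so a stretched reverb tail acquires the characteristic metallic coloration; commercial stretchers add refinements (transient handling, phase locking) to reduce, but not eliminate, these artifacts.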

The system was used extensively for Jez Wells’ recent installation at the National Centre for Early Music and has also been deployed on a new recording of British piano music that is due to be released later in 2017.

For more information please contact Jez Wells.

Researcher(s)