Abstract
Sound signals encoded over multiple loudspeakers can create the perception of specific spatial properties. Rendering sound with spatial properties is useful for creating virtual and immersive environments. A novel method for rendering audio signals using an arbitrary arrangement of loudspeakers is presented. The method matches a multipole expansion of the original source wavefield to the field created by the available loudspeakers. A Galerkin-based method-of-moments approach minimizes the error on a sphere around the listener's head while exploiting the orthogonality of the underlying basis functions to reduce computational complexity. The resulting overdetermined system of equations is solved via a singular-value decomposition to obtain the complex loudspeaker weights. This approach distinctly differs from the popular wave-field synthesis (WFS) method, which reconstructs the original sound field in a larger area of interest. Being a sweet-spot solution, this method renders virtual sources in a small area around a single listener's head, thereby reducing the number of loudspeakers needed for performance comparable to WFS and potentially making it more useful in immersive environments. An example with a perimeter array of loudspeakers demonstrates the implementation of the method for a tone moving through a listener's environment. An examination of the number of modes necessary for convergence is presented along with resulting wavefield errors over the spatial region in which the sound is being rendered.
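The sketch below is a minimal numerical illustration of the kind of procedure the abstract describes, not the paper's exact formulation: the field of a virtual point source and the fields of the individual loudspeakers are projected onto spherical-harmonic (multipole) coefficients over a small sphere around the listener's head, and the resulting overdetermined system is solved for complex loudspeaker weights with SVD-based least squares. The array geometry, frequency, truncation order, and free-field monopole source model are all assumptions made for the example.

```python
# Hedged sketch: multipole-coefficient matching for loudspeaker weights.
# Geometry, frequency, and the point-source model are illustrative assumptions.
import numpy as np
from scipy.special import sph_harm

c = 343.0                 # speed of sound [m/s]
f = 1000.0                # rendering frequency [Hz]
k = 2 * np.pi * f / c     # wavenumber

def green(src, obs):
    """Free-field monopole pressure at observation points `obs` for a source at `src`."""
    r = np.linalg.norm(obs - src, axis=-1)
    return np.exp(1j * k * r) / (4 * np.pi * r)

# Loudspeakers on a circular perimeter array of radius 2 m (assumed layout).
L = 16
phi_spk = 2 * np.pi * np.arange(L) / L
speakers = np.stack([2.0 * np.cos(phi_spk), 2.0 * np.sin(phi_spk),
                     np.zeros(L)], axis=1)

# Virtual source to be rendered (assumed position outside the array).
virtual_src = np.array([3.0, 1.0, 0.0])

# Quadrature grid on a small sphere of radius a around the listener's head.
a = 0.1
n_th, n_ph = 24, 48
th = np.linspace(0, np.pi, n_th)             # polar angle
ph = np.linspace(0, 2 * np.pi, n_ph, endpoint=False)
TH, PH = np.meshgrid(th, ph, indexing="ij")
obs = a * np.stack([np.sin(TH) * np.cos(PH),
                    np.sin(TH) * np.sin(PH),
                    np.cos(TH)], axis=-1).reshape(-1, 3)
w_quad = (np.sin(TH) * (np.pi / n_th) * (2 * np.pi / n_ph)).ravel()

# Project a sampled field onto spherical harmonics up to order N; the
# truncation order governs convergence, as examined in the paper.
N = 4
def sh_coeffs(field):
    coeffs = []
    for n in range(N + 1):
        for m in range(-n, n + 1):
            Y = sph_harm(m, n, PH.ravel(), TH.ravel())
            coeffs.append(np.sum(field * np.conj(Y) * w_quad))
    return np.array(coeffs)

b = sh_coeffs(green(virtual_src, obs))                      # target coefficients
A = np.stack([sh_coeffs(green(s, obs)) for s in speakers], axis=1)

# Overdetermined system A w ≈ b solved via SVD-based least squares.
weights, *_ = np.linalg.lstsq(A, b, rcond=None)

# Relative error of the matched multipole coefficients.
err = np.linalg.norm(A @ weights - b) / np.linalg.norm(b)
print("relative coefficient error:", err)
```

In this toy setup the weight vector plays the role of the complex loudspeaker weights described in the abstract; sweeping the virtual source position over time would produce the moving-tone example, and sweeping the truncation order N indicates how many modes are needed for convergence.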
| Original language | English |
| --- | --- |
| Pages (from-to) | 473-481 |
| Number of pages | 9 |
| Journal | AES: Journal of the Audio Engineering Society |
| Volume | 56 |
| Issue number | 6 |
| State | Published - Jun 2008 |
ASJC Scopus subject areas
- General Engineering
- Music