Hierarchical Bayesian Models for Audio and Music Processing
published: Dec. 29, 2007, recorded: December 2007, views: 727
In recent years, there has been increasing interest in statistical approaches and machine-learning tools for the analysis of audio and music signals, driven in part by applications in music information retrieval, computer-aided music education, and interactive music performance systems.
The application of statistical techniques is quite natural: acoustical time series can be conveniently modelled with hierarchical signal models that incorporate prior knowledge from various sources, such as physics or studies of human cognition and perception. Once a realistic hierarchical model is constructed, many audio processing tasks, such as coding, restoration, transcription, separation, identification or resynthesis, can be formulated consistently as Bayesian posterior inference problems.
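To make the "restoration as posterior inference" idea concrete, here is a minimal sketch (not from the lecture; all numbers and the Gaussian prior are illustrative assumptions): with a zero-mean Gaussian prior on each clean sample and Gaussian observation noise, the posterior over the clean signal is available in closed form, and its mean is a shrinkage estimate of the observation.

```python
import numpy as np

# Toy illustration: audio restoration as Bayesian posterior inference.
# Prior on each clean sample: x ~ N(0, tau2). Observation: y = x + e,
# with e ~ N(0, sigma2). Both variances are assumed known here.
rng = np.random.default_rng(0)

tau2 = 1.0      # prior variance of the clean signal (assumed)
sigma2 = 0.25   # observation-noise variance (assumed)

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 5 * t)                        # toy "clean" signal
y = x + rng.normal(0.0, np.sqrt(sigma2), x.shape)    # noisy observation

# Conjugate Gaussian update: the posterior mean shrinks y toward the
# prior mean (zero), with per-sample posterior variance post_var.
gain = tau2 / (tau2 + sigma2)
x_hat = gain * y
post_var = tau2 * sigma2 / (tau2 + sigma2)

# The posterior mean has lower error than the raw noisy observation.
print(np.mean((x_hat - x) ** 2) < np.mean((y - x) ** 2))
```

Richer hierarchical models replace the fixed per-sample prior with structured priors (e.g. Markov dependencies across time or frequency), but the inference principle is the same.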
In this talk, we will review recent advances in signal models for audio and music signal analysis. In particular, factorial switching state space models and Gamma-Markov random fields will be discussed. Some models admit exact inference; for the others, efficient algorithms based on variational or stochastic approximation methods can be developed. We will illustrate the approach with applications to music transcription, tempo tracking, restoration and source separation.
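As a small example of the "some models admit exact inference" case (a sketch with invented numbers, not the lecture's model): when the hidden state is discrete, e.g. a note on/off indicator in a simple transcription-style model, the filtered posterior can be computed exactly with the forward recursion of a hidden Markov model.

```python
import numpy as np

# Switch prior: notes tend to persist (transition matrix is illustrative).
A = np.array([[0.95, 0.05],    # off -> off, off -> on
              [0.10, 0.90]])   # on  -> off, on  -> on
pi = np.array([0.5, 0.5])      # initial state distribution

# Simulated frame energies: N(0, 0.7^2) when off, N(2, 0.7^2) when on.
rng = np.random.default_rng(1)
states = [0] * 20 + [1] * 30 + [0] * 20
y = np.array([rng.normal(0.0 if s == 0 else 2.0, 0.7) for s in states])

def lik(obs):
    """Unnormalised Gaussian likelihood of one observation under each state."""
    means = np.array([0.0, 2.0])
    return np.exp(-0.5 * ((obs - means) / 0.7) ** 2)

# Forward recursion: filtered posterior p(state_t | y_{1:t}), renormalised
# at every step for numerical stability.
alpha = pi * lik(y[0])
alpha /= alpha.sum()
filtered = [alpha]
for obs in y[1:]:
    alpha = (alpha @ A) * lik(obs)
    alpha /= alpha.sum()
    filtered.append(alpha)
filtered = np.array(filtered)

# The filtered "on" probability is higher inside the active segment
# (frames 20..49) than in the initial silent segment.
print(filtered[25:45, 1].mean() > filtered[0:15, 1].mean())
```

Factorial switching state space models couple several such discrete chains with continuous dynamics, which is what makes exact inference intractable there and motivates the variational and stochastic approximations mentioned above.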