Stochastic chains with variable length memory and the algorithm Context
published: Dec. 10, 2007, recorded: September 2007, views: 285
Stochastic chains with variable length memory define an interesting family of stochastic chains of infinite order on a finite alphabet. The idea is that, for each past, only a finite suffix of the past, called the "context", is enough to predict the next symbol. The set of contexts can be represented by a rooted tree with finite labeled branches. The law of the chain is characterized by its tree of contexts and by an associated family of transition probabilities indexed by the tree. These models were first introduced in the information theory literature by Rissanen (1983) as a universal tool for data compression. Recently, they have been used to model scientific data in areas as different as biology, linguistics and music. Originally called "finite memory sources" or "tree machines", these models became quite popular in the statistics literature under the name "Variable Length Markov Chains", coined by Bühlmann and Wyner (1999). In my talk I will present some of the basic ideas, problems and examples of application in the field. I will focus on the algorithm Context, which estimates the tree of contexts and the associated family of transition probabilities defining the chain.
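The prediction mechanism described above can be sketched in a few lines: a context tree maps each context (a finite suffix of the past) to a distribution over the next symbol, and predicting amounts to finding the longest suffix of the past that is a context. This is only an illustrative sketch; the class and method names are invented for this example and are not from the talk.

```python
class ContextTree:
    """Minimal illustrative sketch of a variable-length-memory chain.

    Contexts are stored as tuples with the MOST RECENT symbol first, so
    matching a context is a prefix match on the reversed past.
    """

    def __init__(self, contexts):
        # contexts: dict mapping a context tuple (most recent symbol first)
        #           to a dict {symbol: transition probability}
        self.contexts = contexts

    def find_context(self, past):
        """Return the longest suffix of `past` that is a context."""
        recent = tuple(reversed(past))  # most recent symbol first
        for k in range(len(recent), -1, -1):
            if recent[:k] in self.contexts:
                return recent[:k]
        raise KeyError("no context matches this past")

    def next_distribution(self, past):
        """Distribution of the next symbol given the past."""
        return self.contexts[self.find_context(past)]


# Toy chain on the binary alphabet {0, 1}: after a 1, one symbol of
# memory suffices; after a 0, we must look one more symbol back.
tree = ContextTree({
    (1,):   {0: 0.5, 1: 0.5},
    (0, 0): {0: 0.9, 1: 0.1},
    (0, 1): {0: 0.2, 1: 0.8},
})

print(tree.find_context([1, 0, 1]))       # past ends in 1 -> context (1,)
print(tree.next_distribution([1, 1, 0]))  # past ends ...1,0 -> context (0,1)
```

The set of keys here forms the tree of contexts (no context is a suffix of another), which is exactly the property that makes the "longest matching suffix" well defined.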