Grammatical Inference as a Principal Component Analysis Problem
published: Aug. 26, 2009, recorded: June 2009, views: 184
One of the main problems in probabilistic grammatical inference consists in inferring a stochastic language, i.e. a probability distribution, in some class of probabilistic models, from a sample of words independently drawn according to a fixed unknown target distribution p. Here we consider the class of rational stochastic languages, composed of the stochastic languages that can be computed by multiplicity automata, which can be viewed as a generalization of probabilistic automata. Rational stochastic languages p have a useful algebraic characterization: all the mappings up : v ↦ p(uv) lie in a finite-dimensional vector subspace Vp of the vector space R(E) composed of all real-valued functions defined over E. Hence, a first step in the grammatical inference process can consist in identifying the subspace Vp. In this paper, we study the possibility of using principal component analysis to achieve this task. We provide an inference algorithm which computes an estimate of the target distribution. We prove some theoretical properties of this algorithm and we provide results from numerical simulations that confirm the relevance of our approach.
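The core idea in the abstract can be illustrated with a minimal sketch (not the authors' actual algorithm): for a rational stochastic language p, a Hankel-style matrix with entries p(uv) has rank equal to the dimension of the subspace Vp, so a PCA/SVD of that matrix recovers the dimension. The toy distribution below is a hypothetical choice, a rank-1 language over the one-letter alphabet {a}:

```python
import numpy as np

# Hypothetical rank-1 rational stochastic language over the alphabet {a}:
# p(a^n) = 2^(-(n+1)), which is computed by a 1-state multiplicity automaton.
def p(n):
    return 2.0 ** (-(n + 1))

# Hankel-style matrix: H[i, j] = p(a^i a^j) = p(a^(i+j)).
# In practice the entries would be empirical frequencies from the sample.
K = 6
H = np.array([[p(i + j) for j in range(K)] for i in range(K)])

# PCA/SVD step: the number of significant singular values estimates
# the dimension of the subspace V_p spanned by the mappings up.
singular_values = np.linalg.svd(H, compute_uv=False)
dim_estimate = int(np.sum(singular_values > 1e-10))
print(dim_estimate)  # prints 1, matching the 1-state automaton
```

With sampled data the singular values are only approximately zero beyond the true dimension, which is why a thresholding or model-selection step is needed in a real inference algorithm.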