Have We All Been Right? Looking Backwards at Linguistic Theory, Statistics, and Language Acquisition
author: Robert Freidin, Program in Linguistics, Princeton University
author: Jean-Roger Vergnaud
author: Norbert Hornstein, Department of Linguistics, University of Maryland
author: William Gregory Sakas, Computer Science Department, Hunter College
author: Anna Maria Di Sciullo, Université du Québec à Montréal
published: Feb. 10, 2012, recorded: October 2007, views: 5810
It was uncertain by the end of this panel if linguists and computational scientists could find meaningful common ground. As conference organizer Michael Coen initially stated, “The issues we’re discussing are as religious to people as the Red Sox.” The two disciplines view their shared territory in distinctive ways, leading, in this panel and subsequent discussion, to some friction.
Moderator Charles Yang sums up the preceding talks, describing how presenters explored such issues as whether statistical models could adequately capture psychological and linguistic complexity, and whether the learning models fit the developmental data. He cites continued conundrums, such as “How does a child do something that is so apparently in contradiction with what’s in the data,” which he would like to see addressed in discussions of statistical learning of syntax.
Robert Freidin comments, “What I noticed in the presentations of modelers was that syntactic representations put forward were not syntactic representations that I would accept. There is an assumption in linguistics that language has a particular …syntactic structure and not another. … If you have a theory of grammar that gives you the right set of syntactic representations, you might want to say, let’s take that and now let’s see what else do we need to add to explain other things on the periphery.”
Jean-Roger Vergnaud is “puzzled by the approach” of some models that examine the distribution of data in order to infer a grammar. He says, “I think there is a problem with standard treatments that purport to derive phrase structure or constituent structure just from examining strings.”

Norbert Hornstein says, “I was amused that poverty of stimulus here was considered a problem.” Many at this conference treated it as something to be solved, whereas “in my part of the world, it’s an extremely effective tool, not a problem -- a given, we know it exists.” He adds that computationalists “seem to think we’re people who generate phrase structure grammars. …Frankly these are peripheral issues.” Many syntacticians, he notes, are interested in the nature of the initial state of the language faculty, and he suggests it might be useful to ask how current statistical techniques could bear on that question.

William Sakas repeats his request for “discussion about how statistical models might be scaled down to feasibly be embodied in a child.”
Anna Maria Di Sciullo says, “Probabilistic models have been said to be the models of language acquisition. If we look at human possession and acquisition of language, whether words, sentences or text, a human tends to have different behavior with respect to different sorts of structures.” Children, moreover, do not acquire language instantaneously; they pass through characteristic stages of errors. She seems dubious that a model based on probability alone could account for the kinds of nuanced patterns found in human language acquisition.
The question and answer period includes some energetic exchanges among panelists and conference participants, including Josh Tenenbaum, Lila Gleitman, Chris Manning, Amy Perfors, and Partha Niyogi.