Getting at the Semantics of Texts
published: Nov. 24, 2008, recorded: September 2008, views: 353
As semantic technologies keep evolving and maturing, there is growing awareness of the gigantic wealth of knowledge encoded in so-called unstructured data. In fact, the bulk of human knowledge on the web (and in books) is represented in texts. Not even the most optimistic proponents of semantic representation standards expect that this information will be rewritten or extensively complemented with semantic metadata through intellectual labour. On the other hand, a discipline of science and technology called computational linguistics has been concerned for several decades with the automatic analysis of human language. One of the original goals of this field was the automatic understanding of texts by translating them into a knowledge representation language that machines could use for reasoning. However, after sobering experience with the complexity of this task, most applied computational linguists turned to easier challenges. There is now a wide variety of human language technologies, many of which have enabled new types of products. Among these applications are text classification, email response systems, text-to-speech software, grammar checking and statistical machine translation.
In this presentation, however, the state of the art and recent achievements in two strands of language technology will be explained and illustrated with examples. The first is the automatic extraction of semantic relations, or more precisely of relation instances, from large volumes of text. Such relation instances could be events, properties of objects, or opinions on products. Using results from our own research, I will demonstrate how machine learning techniques were combined with existing advanced language analysis methods to improve such analysis beyond the best results achievable by either approach alone. I will also show how semantic domain models can be utilized to improve the performance of relation extraction.
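To make the notion of a relation instance concrete, here is a minimal pattern-based sketch in Python. The relation name, pattern, and example corpus are illustrative assumptions; the talk itself concerns combining machine learning with deep language analysis, which goes well beyond simple surface patterns like this one.

```python
import re

# Toy extractor for instances of an "acquired" relation.
# The pattern and corpus below are invented for illustration only.
ACQ_PATTERN = re.compile(r"(?P<buyer>[A-Z]\w+) acquired (?P<target>[A-Z]\w+)")

def extract_acquisitions(text):
    """Return (buyer, target) relation instances found in the text."""
    return [(m.group("buyer"), m.group("target"))
            for m in ACQ_PATTERN.finditer(text)]

corpus = "Microsoft acquired Powerset in 2008. Oracle acquired Sun in 2010."
print(extract_acquisitions(corpus))
# [('Microsoft', 'Powerset'), ('Oracle', 'Sun')]
```

A surface pattern like this misses paraphrases ("Powerset was bought by Microsoft") and mislabels negated or hypothetical statements, which is precisely why the talk combines learned models with deeper linguistic analysis.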
The second strand of research to be presented is the deep syntactic and semantic analysis of human language. While most computational linguists turned away from this fundamental challenge in favour of lower-hanging fruit, a few groups continued the quest for text understanding. Because of the size of the problem and the desire to develop techniques that would work for more than one language, several of them teamed up in international collaborations. I will briefly describe the two largest international collaborations in this area: the DELPH-IN initiative, dedicated to deep language processing with HPSG, and the PARGRAM initiative, pursuing the same goal with LFG. HPSG and LFG are two advanced formal models of linguistic description developed in the 1970s and 1980s. The PARGRAM initiative was led by PARC, and its results are among the central assets of the search technology company Powerset, which was recently acquired by Microsoft. The results of the DELPH-IN initiative are collected in a growing open-source repository of research resources. I will explain the significance of recent advances by these two consortia and related research activities.
In the conclusion of the talk I will argue that combining the machine-learning approach to relation extraction with the advances of deep linguistic processing research will open the way to the exploitation of large volumes of unstructured textual data by semantic technologies.