From multilingual to cross-lingual processing for Social Sciences and Humanities
In this presentation I will talk about the NewsReader project, which produced a reading machine for several languages that extracts event-centric knowledge graphs from text — graphs that can be of interest to the social sciences and humanities. How did we ensure that text is processed in an interoperable way across these languages, given the different technological challenges each one poses? And how do we represent the results of this processing in a uniform way that still allows for differences and traces provenance relations?
I will focus on various aspects:
1. The social sciences and humanities want semantic and pragmatic processing of text. How can this be achieved in different languages, and how can interoperability across languages be achieved?
2. The role of language resources (lexicons and annotated corpora) in semantic and pragmatic processing of text, and how to address their scarcity for lesser-resourced languages.
3. How to achieve interoperability in processing.
4. How to achieve interoperability in the output of processing.
5. How to deal with differences in the output and the provenance of those differences.
Download slides: clarinannualconference2017_vossen_social_sciences_01.pdf (17.7 MB)