From multilingual to cross-lingual processing for Social Sciences and Humanities

author: Piek Vossen, Vrije Universiteit Amsterdam (VU)
published: Oct. 17, 2017,   recorded: September 2017
released under terms of: Creative Commons Attribution (CC-BY)

Description

In this presentation I will talk about the NewsReader project, which resulted in a reading machine for several languages that extracts event-centric knowledge graphs from text, graphs that can be of interest to the social sciences and humanities. How did we make sure that the processing of text is interoperable across these languages, given the different technological challenges each of them poses? How do we represent the results of this processing in a uniform way that still allows for differences and traces provenance relations?
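To make the idea of an event-centric knowledge graph with provenance concrete, here is a minimal sketch. It is not NewsReader's actual data model or API; the structures, identifiers, and the helper `make_event_graph` are illustrative assumptions. Each extracted event becomes a node with typed role relations, plus a provenance triple pointing back to the text mention it was read from (the `gaf:denotedBy` relation is borrowed from the Grounded Annotation Framework used in NewsReader).

```python
# Illustrative sketch (assumed structures, not NewsReader's actual format):
# an event-centric knowledge graph stored as simple RDF-like triples, where
# each event instance links back to the document mention that denotes it.

def make_event_graph(extractions):
    """Build triples for events plus provenance (event -> source mention)."""
    triples = []
    for ev in extractions:
        eid = ev["id"]
        triples.append((eid, "rdf:type", ev["type"]))
        # one triple per semantic role filler of the event
        for role, filler in ev["roles"].items():
            triples.append((eid, role, filler))
        # provenance: which document and sentence the event was read from
        triples.append((eid, "gaf:denotedBy", ev["mention"]))
    return triples

# toy extraction from one hypothetical document
events = [
    {"id": "ev1", "type": "Acquisition",
     "roles": {"agent": "CompanyA", "patient": "CompanyB"},
     "mention": "doc17#sent3"},
]
graph = make_event_graph(events)
```

Keeping the mention link as a first-class triple, rather than discarding it after extraction, is what lets later consumers trace any claim in the graph back to the text that supports it.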

I will focus on five aspects:

1. Social sciences and humanities want semantic and pragmatic processing of text. How can this be achieved in different languages, and how can interoperability across languages be achieved?
2. The role of language resources (lexicons and annotated corpora) for semantic and pragmatic processing of text, and how to address their scarcity for lesser-resourced languages.
3. How to achieve interoperability in processing.
4. How to achieve interoperability in the output of processing.
5. How to deal with differences in that output and with the provenance of these differences.
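Points 4 and 5 above can be sketched as follows. In this toy example (assumed structures and the helper `merge_mentions` are illustrative, not NewsReader's API), language-specific pipelines emit event mentions in one shared schema; mentions that describe the same real-world event are merged into a single instance, while per-mention provenance records which source, in which language, said it.

```python
# Illustrative sketch: merging event mentions from different language
# pipelines into one uniform event instance, keeping provenance per mention.

def merge_mentions(mentions):
    """Group mentions by a shared event key; keep provenance per mention."""
    instances = {}
    for m in mentions:
        # a crude identity key: same type, date, and participant set
        key = (m["type"], m["date"], tuple(sorted(m["participants"])))
        inst = instances.setdefault(key, {"mentions": []})
        inst["mentions"].append({"doc": m["doc"], "lang": m["lang"]})
    return instances

# the same (hypothetical) meeting reported in an English and a Dutch article
mentions = [
    {"type": "Meeting", "date": "2014-05-01",
     "participants": ["Obama", "Merkel"], "doc": "en_doc1", "lang": "en"},
    {"type": "Meeting", "date": "2014-05-01",
     "participants": ["Merkel", "Obama"], "doc": "nl_doc9", "lang": "nl"},
]
merged = merge_mentions(mentions)
```

Because the merged instance keeps one provenance record per mention, differences between sources remain traceable: if the Dutch and English articles disagree on some detail, the conflicting values can each be attributed to their own document rather than silently overwritten.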
