Anaphora and coreference resolution: still a hard nut to crack? How far has it gone, what is its impact on NLP and what are the ways forward?
Anaphora and coreference resolution are arguably among the most challenging Natural Language Processing (NLP) tasks. Research on both tasks has focused almost exclusively on the development and intrinsic evaluation of various algorithms. While publications report positive results, the speaker shows that replicating some of the best-known algorithms gives reason for concern: their performance is far from ideal and their evaluation far from transparent.
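To make the intrinsic-evaluation discussion concrete, below is a minimal Python sketch of the MUC link-based metric (Vilain et al., 1995), one of the standard coreference scores; it is an illustration of how such metrics are computed, not the scoring setup used in the studies the speaker reports.

```python
def muc_score(key_chains, response_chains):
    """MUC link-based recall/precision/F1 (Vilain et al., 1995).

    key_chains and response_chains are lists of sets of mention ids,
    e.g. [{'a', 'b', 'c'}, {'d', 'e'}].
    """
    def links(chains_a, chains_b):
        # With a=key, b=response this yields the recall numerator/denominator;
        # swapping the arguments yields precision.
        num = den = 0
        for chain in chains_a:
            # Partition the chain by the chains of b; mentions missing
            # from b each form their own singleton partition.
            partitions = set()
            for mention in chain:
                owner = next((i for i, c in enumerate(chains_b) if mention in c), None)
                partitions.add((None, mention) if owner is None else owner)
            num += len(chain) - len(partitions)
            den += len(chain) - 1
        return num, den

    r_num, r_den = links(key_chains, response_chains)
    p_num, p_den = links(response_chains, key_chains)
    recall = r_num / r_den if r_den else 0.0
    precision = p_num / p_den if p_den else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, precision, f1


# Example: two gold chains, one boundary error in the response.
print(muc_score([{'a', 'b', 'c'}, {'d', 'e'}],
                [{'a', 'b'}, {'c', 'd', 'e'}]))  # -> (0.667, 0.667, 0.667)
```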
Anaphora and coreference resolution are crucial to the operation of NLP systems and should not be regarded in isolation but in the wider context of NLP applications. Extrinsic evaluation, i.e. the impact of an anaphora or coreference resolution module on the larger NLP system of which it is part, is an under-researched topic, and several studies conducted by the speaker seek to fill this gap. More specifically, the speaker discusses whether, and if so to what extent, anaphora and coreference resolution can improve the performance of four NLP applications: text summarisation, term extraction, text categorisation and textual entailment.
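As an illustration of what such an extrinsic evaluation might look like in code, the following Python sketch compares a downstream task's score with and without a coreference preprocessing step that substitutes resolved antecedents for anaphoric expressions. All the injected callables (`resolve_coreference`, `run_task`, `score`) are hypothetical placeholders standing in for whatever resolver, application and metric are under study, not references to any specific system from the talk.

```python
def substitute_antecedents(tokens, substitutions):
    """Replace anaphoric tokens (e.g. pronouns) with their resolved antecedents.

    substitutions is a list of (token_index, antecedent_string) pairs,
    as produced by some coreference resolver.
    """
    resolved = list(tokens)
    for index, antecedent in substitutions:
        resolved[index] = antecedent
    return resolved


def extrinsic_gain(documents, references, resolve_coreference, run_task, score):
    """Measure a downstream task's score change due to coreference preprocessing.

    documents: list of token lists; references: gold outputs for the task.
    resolve_coreference, run_task, score: hypothetical callables supplied
    by the experimenter (resolver, NLP application, evaluation metric).
    """
    # Baseline: run the task on the raw documents.
    baseline = score([run_task(doc) for doc in documents], references)

    # Condition: run the same task after antecedent substitution.
    resolved_docs = [substitute_antecedents(doc, resolve_coreference(doc))
                     for doc in documents]
    with_coref = score([run_task(doc) for doc in resolved_docs], references)

    # Positive values mean coreference resolution helped the application.
    return with_coref - baseline
```

The same harness applies unchanged to any of the four applications mentioned above, since only `run_task` and `score` vary.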
The presentation concludes with suggested ways forward for improving anaphora and coreference resolution and outlines the speaker's latest related research.