12 shades of RDF: Impact of Syntaxes on Data Extraction with Language Models
Published on Jun 17, 2024
The fine-tuning of generative pre-trained language models (PLMs) on a new task can be impacted by how the inputs and outputs are represented. This article focuses on the linearization process...
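As a minimal sketch of what "choice of syntax" means here (not taken from the article; it assumes the rdflib Python library and a made-up example graph), the same RDF triples can be serialized into several W3C syntaxes, each giving a different textual linearization to feed a model:

# Minimal sketch (assumptions: rdflib is installed; the triples are illustrative
# examples, not data from the article). The same graph yields a different
# linearization depending on the chosen W3C syntax.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")

g = Graph()
g.bind("ex", EX)
g.add((EX.Paris, RDF.type, EX.City))
g.add((EX.Paris, EX.population, Literal(2165423)))

# Serialize the identical graph in three of the syntaxes the talk compares.
for fmt in ("turtle", "nt", "xml"):
    print(f"--- {fmt} ---")
    print(g.serialize(format=fmt))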
Chapter list
12 shades of RDF: Impact of Syntaxes on Data Extraction with Language Models (00:00)
I. RDF-Pattern Based Extraction (00:28)
RDF-Pattern Based Extraction (00:34)
12 shades of RDF RQ - 1 (00:55)
12 shades of RDF RQ - 2 (01:10)
12 shades of RDF RQ - 3 (01:31)
II. Graph linearization (01:44)
Encoder-decoder FT models for relation extraction via linearization (01:47)
Linearization proposed by the literature (02:00)
RDF syntaxes proposed by the W3C... (02:19)
6 or 12 shades of RDF? (02:28)
Turtle Light (02:38)
Variations: Factorisation of triples (02:52)
Variations: One-line Turtle light? (03:16)
III. Experimental Framework (03:27)
Ground-truth construction (03:32)
Shape definition (03:50)
Models: BART-base & T5-base FT - 1 (04:13)
Models: BART-base & T5-base FT - 2 (04:25)
Model tokenizers (04:36)
Evaluation - 1 (05:10)
Evaluation - 2 (06:15)
Experimental overview (07:12)
IV. And the Best syntax is? (07:23)
IV. The best syntaxes - 1 (07:32)
IV. The best syntaxes - 2 (07:36)
IV. The best syntaxes - 3 (07:56)
IV. The best syntaxes - 4 (08:24)
IV. The best syntaxes - 5 (08:35)
IV. The best syntaxes - 6 (09:51)
IV. The best syntaxes - 7 (10:22)
V. Conclusions (10:32)
How does the choice of a syntax impact the generation of RDF triples using datatype properties? (10:39)
What did you wish you knew before starting this work? (11:18)
What is a key challenge going forward that your work gives rise to? (11:52)
Contact (12:21)
Let’s discuss! (12:26)