Machine Learning with Knowledge Graphs

author: Volker Tresp, Siemens AG
published: July 30, 2014,   recorded: May 2014,   views: 14033




Most successful applications of statistical machine learning focus on response learning, or signal-reaction learning, where an output is produced as a direct response to an input. An important feature is a quick response time, the basis for, e.g., real-time ad placement on the Web, real-time address reading in postal automation, or a fast reaction to threats for a biological being. One might argue that knowledge about specific world entities and their relationships becomes necessary as the complexity of an agent's world increases, for example when an agent needs to function in a complex social community.

As the Semantic Web community is well aware, a natural representation of knowledge about entities and their relationships is a directed labeled graph, where nodes represent entities and a labeled link stands for a true fact. A number of successful graph-based knowledge representations, such as DBpedia, YAGO, or the Google Knowledge Graph, have recently been developed and are the basis of applications ranging from the support of search to the realization of question-answering systems.

Statistical machine learning can play an important role in knowledge graphs as well. By exploiting statistical relational patterns one can predict the likelihood of new facts, find entity clusters, and determine whether two entities refer to the same real-world object. Furthermore, one can analyze new entities, map them to existing entities (recognition), and predict likely relations for a new entity. These learning tasks can be elegantly approached by first transforming the knowledge graph into a 3-way tensor, where two of the modes represent the entities in the domain and the third mode represents the relation type. Generalization is achieved by tensor factorization using, e.g., the RESCAL approach. A particular feature of RESCAL is that it exhibits collective learning, where information can propagate through the knowledge graph to support a learning task.

The presentation introduces the RESCAL approach and shows its application to different learning and decision tasks.
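The tensor construction and factorization described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the author's implementation: the entities, relations, and triples are made-up toy data, and plain gradient descent on the squared reconstruction error stands in for the alternating-least-squares updates used in the original RESCAL papers. The key structural idea is the same: each relation slice X_k of the 3-way tensor is approximated as A @ R_k @ A.T, with a single entity matrix A shared across all relations (this sharing is what enables collective learning).

```python
import numpy as np

# Toy knowledge graph (hypothetical data for illustration only).
entities = ["Alice", "Bob", "Carol", "Siemens"]
relations = ["knows", "worksFor"]
triples = [
    ("Alice", "knows", "Bob"),
    ("Bob", "knows", "Carol"),
    ("Alice", "worksFor", "Siemens"),
    ("Bob", "worksFor", "Siemens"),
]

e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: k for k, r in enumerate(relations)}
n, m = len(entities), len(relations)

# 3-way tensor: slice X[k] is the adjacency matrix of relation k;
# X[k, i, j] = 1 iff the triple (entity_i, relation_k, entity_j) holds.
X = np.zeros((m, n, n))
for s, r, o in triples:
    X[r_idx[r], e_idx[s], e_idx[o]] = 1.0

rng = np.random.default_rng(0)
d = 2                                      # latent dimension
A = rng.normal(scale=0.1, size=(n, d))     # shared entity embeddings
R = rng.normal(scale=0.1, size=(m, d, d))  # one core matrix per relation

# Minimize 0.5 * sum_k ||A @ R[k] @ A.T - X[k]||^2 + L2 regularization
# by gradient descent (RESCAL proper uses alternating least squares).
lr, reg = 0.01, 1e-3
for _ in range(5000):
    grad_A = reg * A
    for k in range(m):
        E = A @ R[k] @ A.T - X[k]                  # reconstruction residual
        grad_A += E @ A @ R[k].T + E.T @ A @ R[k]  # d loss / d A for slice k
        R[k] -= lr * (A.T @ E @ A + reg * R[k])    # d loss / d R[k]
    A -= lr * grad_A

# Link prediction: score an unseen triple. A high score suggests the
# model infers (Carol, worksFor, Siemens) from the observed patterns.
score = float(A[e_idx["Carol"]] @ R[r_idx["worksFor"]] @ A[e_idx["Siemens"]])
```

Because A is shared across slices, evidence from the "knows" relation shapes Carol's embedding and can propagate into predictions for "worksFor" — a small-scale analogue of the collective learning effect mentioned above.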

See Also:

Download slides: eswc2014_tresp_machine_learning_01.pdf (4.1 MB)


Reviews and comments:

Comment1 Shawna kristy, April 25, 2020 at 11:16 a.m.:

Well, I think (and I might be wrong in my analysis) that with the recent arrival of Google's BERT algorithm, things are likely to have changed in terms of the benefits and advantages of using Schema markup in content compared to the recent past, and websites have benefited greatly from that.

