Combining Logic and Probability: Languages, Algorithms and Applications
AI problems are characterized by high degrees of complexity and uncertainty. Complexity is well handled by first-order logic, and uncertainty by probability. Combining the two in one language would be highly desirable, and the last decade has seen rapid progress in this direction. Many probabilistic logical languages have been proposed, and efficient inference and learning algorithms for them are available, often in open source software. Probabilistic logical techniques have been successfully applied to a wide variety of problems in natural language processing, vision, robotics, planning, social networks, the Web, and other areas. This tutorial begins with an overview of the key issues in this area and the solutions that have been proposed, from representation to learning and inference. As an example, we then focus on Markov logic, which attaches weights to first-order formulas and treats them as templates for features of log-linear models. We look in particular at the application of lifting techniques to probabilistic inference in relational domains, the combination of statistical learning with inductive logic programming (a.k.a. statistical relational learning), and the application of these techniques to machine reading.
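The Markov logic idea sketched above — weighted first-order formulas acting as templates for features of a log-linear model, with P(world) proportional to exp of the weighted count of satisfied groundings — can be illustrated with a tiny brute-force sketch. The domain, predicate names, and weights below are illustrative assumptions (the classic smoking/friends toy example), not material from the tutorial, and real MLN systems use far more efficient inference than enumerating all possible worlds.

```python
import itertools
import math

# Hypothetical toy Markov logic network: domain, predicates, and weights
# are illustrative assumptions, not taken from the tutorial itself.
DOMAIN = ["A", "B"]

# All ground atoms over the domain.
ATOMS = ([("Smokes", x) for x in DOMAIN]
         + [("Cancer", x) for x in DOMAIN]
         + [("Friends", x, y) for x in DOMAIN for y in DOMAIN if x != y])

def groundings(world):
    """Count satisfied groundings of each weighted formula in a world.

    `world` maps ground atoms like ('Smokes', 'A') to True/False.
    Each first-order formula is a template: grounding it over the
    domain yields one feature per ground instance.
    """
    # Formula 1 (weight 1.5): Smokes(x) => Cancer(x)
    n1 = sum(1 for x in DOMAIN
             if (not world[("Smokes", x)]) or world[("Cancer", x)])
    # Formula 2 (weight 1.1): Friends(x,y) & Smokes(x) => Smokes(y)
    n2 = sum(1 for x in DOMAIN for y in DOMAIN if x != y
             if (not (world[("Friends", x, y)] and world[("Smokes", x)]))
             or world[("Smokes", y)])
    return [(1.5, n1), (1.1, n2)]

def weight(world):
    # Log-linear model: P(world) is proportional to exp(sum_i w_i * n_i(world)).
    return math.exp(sum(w * n for w, n in groundings(world)))

# Partition function Z by brute-force enumeration of all 2^|ATOMS| worlds.
worlds = [dict(zip(ATOMS, vals))
          for vals in itertools.product([False, True], repeat=len(ATOMS))]
Z = sum(weight(w) for w in worlds)

def prob(world):
    return weight(world) / Z

# Example query: marginal probability that A has cancer.
p_cancer_A = sum(prob(w) for w in worlds if w[("Cancer", "A")])
```

Because worlds satisfying more weighted groundings become exponentially more probable rather than impossible when a formula is violated, the model degrades gracefully under uncertainty — the soft-constraint behavior that distinguishes Markov logic from pure first-order logic.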