Learning Representations of Large-scale Networks

author: Qiaozhu Mei, Department of Electrical Engineering and Computer Science, University of Michigan
author: Jian Tang, Montreal Institute for Learning Algorithms (MILA), University of Montreal
published: Nov. 21, 2017,   recorded: August 2017,   views: 1270

Watch videos:

Part 1 (1:43:44)
Part 2 (1:07:51)

Description

Large-scale networks such as social networks, citation networks, the World Wide Web, and traffic networks are ubiquitous in the real world. Networks can also be constructed from text, time series, behavior logs, and many other types of data. Mining network data attracts increasing attention in academia and industry, covers a variety of applications, and influences the methodology of mining many types of data. A prerequisite to network mining is to find an effective representation of networks, which largely determines the performance of downstream data mining tasks. Traditionally, networks have been represented as adjacency matrices, which suffer from data sparsity and high dimensionality. Recently, there has been fast-growing interest in learning continuous and low-dimensional representations of networks. This is a challenging problem for multiple reasons: (1) network data (nodes and edges) are sparse, discrete, and globally interactive; (2) real-world networks are very large, usually containing millions of nodes and billions of edges; and (3) real-world networks are heterogeneous. Edges can be directed, undirected, or weighted, and both nodes and edges may carry different semantics.
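To make the contrast concrete, here is a minimal sketch (not from the tutorial itself) of moving from a sparse, high-dimensional adjacency matrix to dense, low-dimensional node vectors. It uses a simple spectral embedding — the top eigenvectors of the adjacency matrix — as a stand-in for the neural methods the tutorial covers; the toy graph and dimensionality are illustrative assumptions.

```python
import numpy as np

# Toy undirected graph: two triangles (0-1-2 and 3-4-5) joined by edge 2-3.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0   # traditional representation: one sparse n-dim row per node

# A simple spectral embedding: take the top-d eigenvectors of A, scaled by
# their eigenvalues, as dense d-dimensional node vectors.
d = 2
vals, vecs = np.linalg.eigh(A)   # eigenvalues returned in ascending order
Z = vecs[:, -d:] * vals[-d:]     # each row of Z is a dense 2-d node representation

def dist(a, b):
    return np.linalg.norm(Z[a] - Z[b])

# Nodes in the same triangle land closer together than nodes across the bridge.
assert dist(0, 2) < dist(0, 4)
```

Real methods replace the eigendecomposition (which does not scale to millions of nodes) with sampling- and optimization-based training, but the goal is the same: a small dense vector per node that preserves graph proximity.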

In this tutorial, we will introduce the recent progress on learning continuous and low-dimensional representations of large-scale networks. This includes methods that learn the embeddings of nodes, methods that learn representations of larger graph structures (e.g., an entire network), and methods that lay out very large networks in very low-dimensional (2D or 3D) spaces. We will introduce methods for learning different types of node representations: representations that can be used as features for node classification, community detection, link prediction, and network visualization. We will also introduce end-to-end methods that learn the representation of the entire graph structure with deep neural networks, by directly optimizing tasks such as information cascade prediction, chemical compound classification, and protein structure classification. We will highlight open-source implementations of these techniques.
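For the whole-graph methods mentioned above, a common building block is a "readout" that aggregates node vectors into one fixed-size graph vector before a classifier. The sketch below shows the simplest such readout, mean pooling; the node embeddings are hypothetical, and end-to-end methods learn the node vectors and the classifier jointly rather than pooling fixed vectors as here.

```python
import numpy as np

def graph_embedding(node_vectors: np.ndarray) -> np.ndarray:
    """Mean-pool per-node vectors into one fixed-size graph representation."""
    return node_vectors.mean(axis=0)

# Hypothetical 2-d node embeddings for two graphs of different sizes;
# pooling yields same-length vectors, so the graphs become comparable
# inputs to a downstream classifier (e.g., for compound classification).
g1 = np.array([[0.1, 0.9], [0.2, 0.8], [0.0, 1.0]])  # 3-node graph
g2 = np.array([[0.9, 0.1], [1.0, 0.0]])              # 2-node graph

z1, z2 = graph_embedding(g1), graph_embedding(g2)
assert z1.shape == z2.shape == (2,)
```

More expressive readouts (sum pooling, attention-weighted pooling) follow the same pattern: variable-size graph in, fixed-size vector out.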

Link to tutorial: https://sites.google.com/site/pkujiantang/home/kdd17-tutorial


Reviews and comments:

Comment 1, Iain, October 29, 2018 at 9:26 a.m.:

Useful methods for learning different types of node representation.


Comment 2, Mae Waters, December 8, 2021 at 7:12 p.m.:

Such an informative talk; keep up the good work.


Comment 3, Samantha Kim, June 11, 2022 at 3:12 p.m.:

Can you please tell me where this video was recorded?
