Building Chemogenomics Models from a Large-Scale Public Dataset and Applying them to Industrial Datasets

author: Noé Sturm, AstraZeneca
published: June 28, 2019, recorded: May 2019
Description

ExCAPE was an EU-funded project that aimed to harness the power of supercomputers to speed up drug discovery (http://excape-h2020.eu/). Through the project team, we were given the opportunity to build large-scale machine-learning models for compound-activity prediction from public databases and to apply them to industrial datasets. In this talk, I will present the process of collecting chemogenomics data from public resources to build a benchmark dataset. I will then explain how we built models with multi-task deep-learning and matrix-factorization algorithms and evaluated their performance. Finally, I will show how these models were applied to industrial datasets.
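The lecture itself contains no code, but the matrix-factorization idea it mentions can be illustrated on a toy compound-by-target activity matrix. The sketch below is purely hypothetical: the matrix values, latent dimension, and optimization settings are illustrative assumptions, not the ExCAPE data or the actual models from the talk.

```python
import numpy as np

# Toy compound x target activity matrix: 1 = active, 0 = inactive,
# nan = untested pair. Values are illustrative, not from ExCAPE.
Y = np.array([
    [1.0, 0.0, np.nan],
    [1.0, np.nan, 0.0],
    [np.nan, 0.0, 1.0],
    [0.0, 1.0, 1.0],
])
mask = ~np.isnan(Y)                 # which (compound, target) pairs were measured
rng = np.random.default_rng(0)

k = 2                               # latent dimension (hypothetical hyperparameter)
U = 0.1 * rng.standard_normal((Y.shape[0], k))  # compound factors
V = 0.1 * rng.standard_normal((Y.shape[1], k))  # target factors

lr, reg = 0.05, 0.01
for _ in range(3000):
    # Gradient descent on squared error over observed entries only,
    # with light L2 regularization on both factor matrices.
    E = np.where(mask, U @ V.T - np.nan_to_num(Y), 0.0)
    U -= lr * (E @ V + reg * U)
    V -= lr * (E.T @ U + reg * V)

pred = U @ V.T                      # predictions for all pairs, including untested ones
print(np.round(pred, 2))
```

The untested (nan) entries of `Y` receive predicted scores from the learned compound and target factors, which is the sense in which a factorization model "fills in" a sparse activity matrix; the production-scale systems discussed in the talk use far larger matrices and more sophisticated solvers.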

Download slides: icgeb_sturm_chemogenomics_models_01.pdf (5.1 MB)
