Transformative Machine Learning: Explicit is Better than Implicit
published: June 28, 2019, recorded: May 2019, views: 114
The key to success in machine learning (ML) is the use of effective data representations. Formerly, ML was applied only to isolated problems. Now, with the ever-increasing availability of data, ML is being applied to large sets of related problems. In multi-task ML and transfer ML, related problems are exploited to improve predictive performance. My colleagues and I have developed transformative learning (TL): a novel and general ML representation for sets of related problems. TL has the dual advantages of improving ML performance and enabling explainable predictions. The fundamental new idea is to transform standard data representations into an explicit representation based on the predictions of pre-trained models. We have evaluated TL using four of the most important non-linear ML methods (random forests, support vector machines, k-nearest neighbours, and neural networks) on three real-world scientific problem areas: drug design, predicting gene expression, and meta machine learning. TL significantly improved the predictive performance of all four ML methods in all three areas. A valuable by-product of TL is the large-scale production of prediction models. We applied these models to cluster drug targets/genes and drugs by functional similarity, and used them to make large-scale drug-activity and gene-activity predictions.
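The core idea in the abstract — re-representing each instance by the predictions of models pre-trained on related problems — can be sketched in a few lines. The sketch below uses scikit-learn with synthetic related regression tasks; the task setup, model choices, and hyperparameters are illustrative assumptions, not the authors' exact pipeline:

```python
# Sketch of a "transformative" representation: instances are described by
# the predictions of models pre-trained on related tasks, and a standard
# learner is then trained on that explicit representation.
# Everything here (synthetic tasks, random forests, sizes) is illustrative.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)

# One shared (intrinsic) feature space; several related regression targets.
X, _ = make_regression(n_samples=600, n_features=20, random_state=0)
related_targets = [X @ rng.randn(20) + 0.1 * rng.randn(600) for _ in range(5)]
target_y = X @ rng.randn(20) + 0.1 * rng.randn(600)

# Pre-train one model per related problem on the intrinsic representation.
pretrained = [
    RandomForestRegressor(n_estimators=50, random_state=i).fit(X, y)
    for i, y in enumerate(related_targets)
]

def transform(X_in):
    """Transformed representation: one feature per pre-trained model's prediction."""
    return np.column_stack([m.predict(X_in) for m in pretrained])

X_tr, X_te, y_tr, y_te = train_test_split(X, target_y, random_state=0)

# Baseline learner on the intrinsic features vs. the same learner on the
# transformed (prediction-based) features.
baseline = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
transformed = RandomForestRegressor(n_estimators=100, random_state=0).fit(
    transform(X_tr), y_tr)

print("baseline R^2:   ", round(baseline.score(X_te, y_te), 3))
print("transformed R^2:", round(transformed.score(transform(X_te), y_te), 3))
```

On real sets of related problems the transformed representation can also be concatenated with the intrinsic features rather than replacing them; either way, the transformed features are explicit and interpretable, since each one is the output of a named pre-trained model.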
Download slides: icgeb_king_transformative_machine_learning_01.pdf (2.9 MB)