Parallel Exact Inference on Multi-Core Processors
published: Jan. 19, 2010, recorded: December 2009, views: 4242
Exact inference in Bayesian networks is a fundamental AI technique with numerous applications, including medical diagnosis, consumer help desks, pattern recognition, credit assessment, data mining, and genetics. Inference is NP-hard, and many applications require real-time performance. In this talk we present task- and data-parallel techniques that achieve scalable performance on general-purpose multi-core and heterogeneous multi-core architectures. We develop collaborative schedulers that dynamically map junction tree tasks to cores, leading to highly optimized implementations. We design lock-free structures that reduce thread coordination overheads in scheduling while balancing the load across threads. For the Cell BE, we develop a lightweight centralized scheduler that coordinates the activities of the synergistic processing elements (SPEs). Our scheduler is further optimized for throughput-oriented architectures such as Sun Niagara processors. We demonstrate scalable and efficient Pthreads implementations for a wide class of Bayesian networks with various topologies, clique widths, and numbers of states of the random variables. Our implementations show improved performance compared with OpenMP and compiler-based optimizations.