Scalable and Reliable Inference for Probabilistic Modeling
| Author | Ruqi Zhang |
| Release | 2021 |
| ISBN-10 | OCLC:1404077346 |
Download or read book Scalable and Reliable Inference for Probabilistic Modeling, written by Ruqi Zhang and released in 2021. Available in PDF, EPUB and Kindle.

Book excerpt: Probabilistic modeling, also known as probabilistic machine learning, provides a principled framework for learning from data, with the key advantage of offering rigorous uncertainty quantification. In the era of big and complex data, there is an urgent need for new inference methods in probabilistic modeling that extract information from data effectively and efficiently. This thesis shows how to perform theoretically guaranteed, scalable, and reliable inference for modern machine learning. Addressing both theory and practice, it provides a foundational understanding of scalable and reliable inference, practical algorithms for new inference methods, and extensive empirical evaluation on common machine learning and deep learning tasks.

Classical inference algorithms, such as Markov chain Monte Carlo (MCMC), have enabled probabilistic modeling to achieve gold-standard results on many machine learning tasks. However, these algorithms are rarely used in modern machine learning because they are difficult to scale to large datasets. Existing work suggests an inherent trade-off between scalability and reliability, forcing practitioners to choose between expensive exact methods and biased scalable ones. To overcome this trade-off, the thesis introduces general and theoretically grounded frameworks that enable fast and asymptotically correct inference, with applications to Gibbs sampling, Metropolis-Hastings, and Langevin dynamics.

Deep neural networks (DNNs) have achieved impressive success on a variety of learning problems in recent years. However, DNNs have been criticized for estimating uncertainty poorly. Probabilistic modeling provides a principled alternative that can mitigate this issue: probabilistic models account for model uncertainty and achieve automatic complexity control. The thesis analyzes the key challenges of probabilistic inference in deep learning and presents novel approaches for fast posterior inference over neural network weights.
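To make the scalability barrier mentioned above concrete, here is a minimal sketch of classical random-walk Metropolis-Hastings. This is the textbook baseline, not the thesis's own method; the function name `metropolis_hastings` and all parameters are illustrative. The key point is in the comments: every accept/reject decision evaluates the target density on the full dataset, which is exactly what makes exact MCMC expensive at scale.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps, step_size=0.5, rng=None):
    """Random-walk Metropolis-Hastings over a log-density `log_target`.

    Note: each accept/reject step evaluates `log_target` once, and for a
    posterior that means touching the FULL dataset, which is what makes
    classical MH hard to scale to large data.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    logp = log_target(x)
    samples = []
    for _ in range(n_steps):
        proposal = x + step_size * rng.standard_normal(x.shape)
        logp_prop = log_target(proposal)
        # Accept with probability min(1, p(proposal) / p(x));
        # the symmetric Gaussian proposal cancels in the ratio.
        if np.log(rng.uniform()) < logp_prop - logp:
            x, logp = proposal, logp_prop
        samples.append(x.copy())
    return np.array(samples)

# Example: sample a 1D standard normal target.
chain = metropolis_hastings(lambda x: -0.5 * np.sum(x**2),
                            x0=np.zeros(1), n_steps=5000)
```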
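On the scalable side of the trade-off, stochastic gradient Langevin dynamics (SGLD; Welling and Teh, 2011) is a standard example of a minibatch Langevin method whose per-step cost is independent of dataset size, at the price of asymptotic bias from the discretization, the kind of bias that exact scalable frameworks aim to remove. The sketch below is a generic SGLD implementation under assumed gradient callbacks, not the thesis's specific algorithm.

```python
import numpy as np

def sgld(grad_log_prior, grad_log_lik, data, theta0, n_steps,
         step_size=1e-4, batch_size=32, rng=None):
    """Stochastic Gradient Langevin Dynamics.

    Each update uses a minibatch gradient of the log-posterior plus
    Gaussian noise scaled to the step size, so the cost per step does
    not grow with the dataset; the fixed step size introduces bias.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(data)
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for _ in range(n_steps):
        batch = data[rng.choice(n, size=batch_size, replace=False)]
        # (n / batch_size) rescaling gives an unbiased estimate of the
        # full-data gradient of the log-posterior.
        grad = grad_log_prior(theta) + (n / batch_size) * grad_log_lik(theta, batch)
        noise = rng.standard_normal(theta.shape) * np.sqrt(step_size)
        theta = theta + 0.5 * step_size * grad + noise
        samples.append(theta.copy())
    return np.array(samples)

# Example (hypothetical model): posterior over the mean of a unit-variance
# Gaussian with an N(0, 1) prior.
data = np.random.default_rng(0).normal(loc=2.0, size=1000)
chain = sgld(grad_log_prior=lambda t: -t,              # d/dt log N(t; 0, 1)
             grad_log_lik=lambda t, b: np.sum(b - t),  # summed over the batch
             data=data, theta0=np.zeros(1), n_steps=2000)
```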