An Exploratory Analysis of Multi-Class Uncertainty Approximation in Bayesian Convolutional Neural Networks
Neural networks are an important and powerful family of models, but they have lacked practical ways of estimating predictive uncertainty. Recently, researchers from the Bayesian machine learning community developed a technique called Monte Carlo (MC) dropout, which provides a theoretically grounded approach to estimating predictive uncertainty in dropout neural networks. Some researchers have developed ad hoc approximations of these uncertainty estimates for use in convolutional neural networks. We extend their research to a multi-class setting and find that ad hoc approximations of predictive uncertainty in some cases provide useful information about a model's confidence in its predictions. Furthermore, we develop a novel approximation of uncertainty that in some respects performs better than those currently in use. Finally, we test these approximations in practice and compare them to other methods suggested in the literature. In our setting we find that the ad hoc approximations perform adequately, but not as well as those already suggested by experts.
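The abstract does not specify the approximations it studies, but the MC dropout idea it builds on can be sketched generically: keep dropout active at test time, run several stochastic forward passes, and summarize the spread of the resulting class probabilities. The toy linear "network", its weights, and the use of predictive entropy as the multi-class uncertainty summary below are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy stand-in for a trained model: one linear layer, 3 classes, 10 features.
W = rng.normal(size=(3, 10))
x = rng.normal(size=10)

def stochastic_forward(x, p=0.5):
    # MC dropout: dropout stays active at prediction time, so each
    # forward pass samples a different thinned network.
    mask = rng.random(x.shape) > p
    return softmax(W @ (x * mask) / (1 - p))

T = 200
probs = np.stack([stochastic_forward(x) for _ in range(T)])  # shape (T, 3)
mean_probs = probs.mean(axis=0)  # MC estimate of the predictive distribution

# Predictive entropy: one common scalar summary of multi-class uncertainty.
entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12))
```

Higher entropy (closer to log of the number of classes) indicates the sampled predictions disagree, i.e. the model is less confident in its prediction.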