3.2 Bayesian neural networks
Bayesian neural networks combine neural networks with Bayesian learning: a prior $p(\theta)$ is placed over the network parameters $\theta$, and Bayes' theorem is used to infer their posterior distribution given the training data $D$,

$$p(\theta \mid D) = \frac{p(D \mid \theta)\,p(\theta)}{p(D)} \propto p(D \mid \theta)\,p(\theta),$$

where $p(D \mid \theta)$ is the likelihood and $p(D)$ is the evidence. Once the posterior distribution over the weight parameters has been estimated, the prediction output for a new input $x^*$ is obtained by marginalizing over the parameters:

$$p(y^* \mid x^*, D) = \int p(y^* \mid x^*, \theta)\, p(\theta \mid D)\, d\theta.$$
The integral above is intractable, so approximations are used in practice; the most widely used is the Monte Carlo approximation. Following the law of large numbers, the expectation is approximated by the mean prediction of $N$ sampled networks,

$$p(y^* \mid x^*, D) \approx \frac{1}{N}\sum_{i=1}^{N} p(y^* \mid x^*, \theta_i), \qquad \theta_i \sim p(\theta \mid D).$$
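As a concrete illustration, here is a minimal sketch of this averaging step; the classifier `model` and the list `theta_samples` of weight draws from (an approximation of) the posterior are hypothetical placeholders, not code from the survey:

```python
import torch

@torch.no_grad()
def mc_predictive(model, theta_samples, x):
    """Approximate p(y* | x*, D) by averaging the predictions of N sampled networks.

    model         : any torch.nn.Module classifier (placeholder)
    theta_samples : list of state_dicts, each one draw from (an approximation of) p(theta | D)
    x             : batch of inputs
    """
    probs = []
    for theta in theta_samples:
        model.load_state_dict(theta)            # plug in the i-th weight sample
        probs.append(torch.softmax(model(x), dim=-1))
    return torch.stack(probs).mean(dim=0)       # 1/N * sum_i p(y* | x*, theta_i)
```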
Wilson and Izmailov (2020) argue that a key advantage of BNNs lies in marginalization, which can improve both the accuracy and calibration of deep neural networks. Moreover, BNNs are not limited to uncertainty estimation; they also bring powerful Bayesian toolboxes to deep learning, e.g., Bayesian model selection, model compression, active learning, and theoretical advances. Although this formulation is simple, BNNs still face challenges. For instance, posterior inference generally has no closed-form solution, because complex models such as neural networks typically admit no conjugate priors (Bishop and Nasrabadi, 2006), so approximate Bayesian inference techniques are needed to compute the posterior. However, applying approximate Bayesian inference directly has proven difficult, because the data and parameter scales of DNNs are very large, i.e., the integral above becomes hard to compute as the data size and the number of parameters grow. In addition, specifying a meaningful prior for DNNs is another challenge.
The authors divide BNNs into three categories according to how the posterior distribution is inferred to approximate Bayesian inference:
Variational inference approaches approximate the (in general intractable) posterior distribution by optimizing over a family of tractable distributions.
Sampling approaches deliver a representation of the target random variable from which realizations can be sampled. Such methods are based on Markov Chain Monte Carlo and further extensions.
Laplace approximation simplifies the target distribution by approximating the log-posterior distribution and then, based on this approximation, deriving a normal distribution over the network weights.
The goal of variational inference is to infer the posterior probabilities $p(\theta \mid D)$ using a prespecified family of distributions. Here, this so-called variational family is defined as a parametric distribution $q_\phi(\theta)$ with variational parameters $\phi$; for example, a Multivariate Normal distribution is parametrized by its mean and covariance matrix. The main idea of variational inference is to find the parameters $\phi$ such that $q_\phi(\theta)$ is close to the posterior of interest. The closeness between the two probability distributions is measured by the Kullback-Leibler (KL) divergence:

$$\mathrm{KL}\big(q_\phi(\theta)\,\|\,p(\theta \mid D)\big) = \mathbb{E}_{q_\phi(\theta)}\!\left[\log \frac{q_\phi(\theta)}{p(\theta \mid D)}\right].$$
Since the KL divergence contains the intractable posterior, it cannot be optimized directly; in practice, the Evidence Lower Bound (ELBO) is optimized instead:

$$\mathrm{ELBO}(\phi) = \mathbb{E}_{q_\phi(\theta)}\big[\log p(D \mid \theta)\big] - \mathrm{KL}\big(q_\phi(\theta)\,\|\,p(\theta)\big).$$
The KL divergence can then be written as

$$\mathrm{KL}\big(q_\phi(\theta)\,\|\,p(\theta \mid D)\big) = \log p(D) - \mathrm{ELBO}(\phi).$$
Since $\log p(D)$ does not depend on the variational parameters $\phi$, minimizing the KL divergence is equivalent to maximizing the ELBO; the two objectives are consistent.
Other keywords (left for the reader to look up): reparameterization; mean-field approximations; Monte Carlo Dropout.
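To make the mean-field assumption, the reparameterization trick, and ELBO optimization concrete, here is a minimal Bayes-by-Backprop-style sketch of a single Bayesian linear layer; the class, hyperparameters, and toy data are illustrative assumptions, not a reference implementation from the survey:

```python
import torch
import torch.nn as nn

class MeanFieldLinear(nn.Module):
    """Linear layer with a factorized (mean-field) Gaussian over its weights."""
    def __init__(self, d_in, d_out, prior_std=1.0):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.rho = nn.Parameter(torch.full((d_out, d_in), -3.0))  # sigma = softplus(rho)
        self.prior = torch.distributions.Normal(0.0, prior_std)

    def forward(self, x):
        sigma = torch.nn.functional.softplus(self.rho)
        eps = torch.randn_like(sigma)
        w = self.mu + sigma * eps                       # reparameterization: w = mu + sigma * eps
        q = torch.distributions.Normal(self.mu, sigma)
        # KL(q(w) || p(w)) for the factorized Gaussian, summed over all weights
        self.kl = torch.distributions.kl_divergence(q, self.prior).sum()
        return x @ w.t()

# one training step: maximize ELBO = E_q[log p(D|w)] - KL(q || p), i.e. minimize the negative ELBO
layer = MeanFieldLinear(10, 1)
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
x, y = torch.randn(32, 10), torch.randn(32, 1)        # toy data
pred = layer(x)
nll = torch.nn.functional.mse_loss(pred, y, reduction='sum')   # -log p(D|w) up to a constant
loss = nll + layer.kl                                           # negative ELBO
loss.backward()
opt.step()
```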
Sampling methods, often referred to as Monte Carlo methods, are another family of Bayesian inference algorithms. They represent the posterior by drawing a set of samples from it and are not restricted to any particular family of distributions, hence probability distributions are obtained non-parametrically. Popular algorithms include particle filtering, rejection sampling, importance sampling, and MCMC sampling. For neural networks, MCMC is the commonly used approach, because methods based on rejection sampling and importance sampling are very inefficient for high-dimensional problems. The main idea of MCMC is to sample from arbitrary distributions by transition in state space; this transition is governed by a record of the current state and the proposal distribution that aims to estimate the target distribution (e.g. the true posterior). To explain this further, define:
a Markov Chain is a distribution over random variables $\theta_1, \ldots, \theta_T$ which follows the state transition rule:

$$p(\theta_{t+1} \mid \theta_1, \ldots, \theta_t) = p(\theta_{t+1} \mid \theta_t),$$
i.e. the next state only depends on the current state and not on any other former state.
At each step, the proposed sample is accepted or rejected according to a fixed rule; as this process continues, it is guaranteed that, after some point, the drawn samples approximately come from the target distribution.
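One concrete instance of this accept/reject rule is random-walk Metropolis-Hastings. A minimal sketch is shown below; the unnormalized log-posterior `log_post`, the step size, and the toy Gaussian target are illustrative assumptions:

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_samples=5000, step=0.1, rng=None):
    """Random-walk Metropolis-Hastings: accept/reject proposals so that, after
    burn-in, the chain's states are approximately samples from the target."""
    rng = rng or np.random.default_rng(0)
    theta, samples = np.asarray(theta0, dtype=float), []
    logp = log_post(theta)
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.shape)   # symmetric proposal
        logp_prop = log_post(proposal)
        if np.log(rng.random()) < logp_prop - logp:                  # accept with prob min(1, ratio)
            theta, logp = proposal, logp_prop
        samples.append(theta.copy())
    return np.array(samples)

# toy target: an unnormalized 2-D Gaussian log-density standing in for log p(theta | D)
samples = metropolis_hastings(lambda t: -0.5 * np.sum(t ** 2), theta0=np.zeros(2))
```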
Other keywords: Hamiltonian Monte Carlo / Hybrid Monte Carlo; Stochastic Gradient Markov Chain Monte Carlo; Langevin dynamics.
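Among these, Stochastic Gradient Langevin Dynamics (SGLD) is particularly convenient for neural networks, since each step is just an ordinary (minibatch) gradient step plus Gaussian noise. A rough, self-contained sketch follows; the quadratic toy energy $U$ and the step size are illustrative assumptions, not the survey's algorithm listing:

```python
import torch

# toy unnormalized negative log posterior U(theta); in a real BNN this would be the
# minibatch negative log likelihood plus the negative log prior
def U(theta):
    return 0.5 * (theta ** 2).sum()

theta = torch.zeros(3, requires_grad=True)
lr, samples = 1e-2, []
for _ in range(5000):
    theta.grad = None
    U(theta).backward()                              # stochastic gradient of U in practice
    with torch.no_grad():
        noise = torch.randn_like(theta) * lr ** 0.5  # injected noise with variance lr
        theta += -0.5 * lr * theta.grad + noise      # Langevin update
    samples.append(theta.detach().clone())           # later samples approximate p(theta | D)
```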
For the rest, please refer to the original paper.
The goal of the Laplace Approximation is to estimate the posterior distribution over the parameters of neural networks around a local mode of the loss surface with a Multivariate Normal distribution.
The Laplace Approximation to the posterior can be obtained by taking the second-order Taylor series expansion of the log posterior over the weights around the MAP estimate $\hat{\theta}$ given some data $D$. If we assume a Gaussian prior with a scalar precision value $\tau > 0$, then this corresponds to the commonly used L2-regularization, and the Taylor series expansion results in

$$\log p(\theta \mid D) \approx \log p(\hat{\theta} \mid D) - \frac{1}{2}(\theta - \hat{\theta})^\top \big(H + \tau I\big)(\theta - \hat{\theta}),$$

where the first-order term vanishes because the gradient of the log posterior is zero at the maximum $\hat{\theta}$. Taking the exponential on both sides and approximating integrals by reverse engineering densities, the weight posterior is approximately a Gaussian with mean $\hat{\theta}$ and covariance matrix $(H + \tau I)^{-1}$, where $H$ is the Hessian of the negative log-likelihood, $H = -\nabla^2_\theta \log p(D \mid \theta)\,\big|_{\hat{\theta}}$. This means that the model uncertainty is represented by the Hessian, resulting in a Multivariate Normal distribution:

$$\theta \sim \mathcal{N}\!\big(\hat{\theta},\, (H + \tau I)^{-1}\big).$$
The core of the Laplace Approximation is the estimation of the Hessian.
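As a toy illustration of estimating that Hessian and turning it into a Gaussian posterior, the sketch below uses a deliberately small linear-regression problem so the full Hessian of the negative log posterior can be computed exactly with `torch.autograd.functional.hessian`; the data, prior precision `tau`, and optimization loop are illustrative assumptions:

```python
import torch

# deliberately tiny problem (Bayesian linear regression) so the full Hessian is cheap
X, y = torch.randn(100, 3), torch.randn(100)
tau = 1.0  # prior precision, i.e. the L2-regularization strength

def neg_log_posterior(theta):
    # -log p(D | theta) - log p(theta) up to constants (Gaussian likelihood + Gaussian prior)
    return 0.5 * ((X @ theta - y) ** 2).sum() + 0.5 * tau * (theta ** 2).sum()

# 1) find the MAP estimate theta_hat by gradient descent
theta_hat = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([theta_hat], lr=0.1)
for _ in range(500):
    opt.zero_grad()
    neg_log_posterior(theta_hat).backward()
    opt.step()

# 2) the Hessian of the negative log posterior at theta_hat is the precision matrix H + tau*I
prec = torch.autograd.functional.hessian(neg_log_posterior, theta_hat.detach())
laplace_posterior = torch.distributions.MultivariateNormal(
    theta_hat.detach(), precision_matrix=prec)
weight_samples = laplace_posterior.sample((10,))  # draws from the approximate weight posterior
```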
Bayesian deep learning has gradually become a popular and powerful research area, and work on BNNs has focused mainly on how to infer the posterior. Beyond this, several new challenges have emerged:
(i) how to specify meaningful priors?
(ii) how to efficiently marginalize over the parameters for fast predictive uncertainty?
(iii) infrastructures such as new benchmarks, evaluation protocols and software tools.
(iv) towards a better understanding of current methodologies and their potential applications.