On the choice of Markov Kernels for Approximate Bayesian Computation
Magid Maatallah
Corresponding Email: [email protected]
Received date: -
Accepted date: -
Abstract:
Approximate Bayesian computation has emerged as a standard computational tool when dealing with the increasingly common scenario of completely intractable likelihood functions in Bayesian inference. Motivated by these considerations, we study both the variance bounding and geometric ergodicity properties of a number of reversible kernels used for approximate Bayesian computation. We show that many common Markov chain Monte Carlo kernels used to facilitate inference in this setting can fail to be variance bounding, and hence geometrically ergodic, which can have consequences for the reliability of estimates in practice. We then prove that a recently introduced Markov kernel in this setting can be variance bounding and geometrically ergodic whenever its intractable Metropolis-Hastings counterpart is, under reasonably weak and manageable conditions. We indicate that the computational cost of the latter kernel is bounded whenever the prior is proper, and present indicative results on an example where spectral gaps and asymptotic variances can be computed.
Keywords: Approximate Bayesian computation; Markov chain Monte Carlo; Local adaptation
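To fix ideas, the following is a minimal sketch of a standard ABC-MCMC kernel of the kind discussed above, in which the intractable likelihood in the Metropolis-Hastings acceptance ratio is replaced by simulating pseudo-data from the model and accepting only when it falls within a tolerance of the observed data. The Gaussian model, standard normal prior, random-walk proposal, and tolerance used here are hypothetical choices for illustration only and are not the specific kernels analysed in this paper.

```python
# Minimal sketch of a standard ABC-MCMC kernel: the intractable likelihood
# in the Metropolis-Hastings ratio is replaced by an indicator that
# pseudo-data simulated from the model falls within a tolerance eps of the
# observed data. All concrete choices below (Gaussian model, standard
# normal prior, random-walk proposal, eps) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

y_obs = 1.3      # observed summary statistic (hypothetical)
eps = 0.2        # ABC tolerance
n_iter = 5000

def log_prior(theta):
    # standard normal prior on theta, up to an additive constant
    return -0.5 * theta ** 2

def simulate(theta):
    # forward-simulate one pseudo summary statistic given theta
    return rng.normal(loc=theta, scale=1.0)

theta = 0.0
chain = np.empty(n_iter)
for t in range(n_iter):
    theta_prop = theta + rng.normal(scale=0.5)   # symmetric random-walk proposal
    # Accept with probability min(1, prior ratio) times the indicator that
    # the simulated pseudo-data lies within eps of the observed data.
    if (np.log(rng.uniform()) < log_prior(theta_prop) - log_prior(theta)
            and abs(simulate(theta_prop) - y_obs) < eps):
        theta = theta_prop
    chain[t] = theta

print("ABC posterior mean estimate:", chain.mean())
```

As the tolerance eps shrinks, the acceptance probability of such a kernel can become arbitrarily small in parts of the state space, which is one route by which variance bounding and geometric ergodicity can fail in practice.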