Understanding the Metropolis-Hastings algorithm

Metropolis-Hastings is an algorithm that allows us to sample from a generic probability distribution, which we'll call our target distribution, even if we don't know the normalizing constant. To do this, we construct and sample from a Markov chain whose stationary distribution is the target distribution that we're looking for. It consists of picking an arbitrary starting value and then iteratively accepting or rejecting candidate samples drawn from another distribution, one that is easy to sample.

Let's say we want to produce samples from a target distribution. We're going to call it p of theta. But we only know it up to a normalizing constant, or up to proportionality. What we have is g of theta. So we don't know the normalizing constant, because perhaps this is difficult to integrate. So we only have g of theta to work with.

The Metropolis-Hastings algorithm will proceed as follows. The first step is to select an initial value for theta. We're going to call it theta-naught. The next step is, for a large number of iterations, so for i from 1 up to some large number m, we're going to repeat the following. The first thing we're going to do is draw a candidate. We'll call that theta-star, as our candidate. And we're going to draw this from a proposal distribution. We're going to call the proposal distribution q of theta-star, given the previous iteration's value of theta. We'll talk more about this q distribution soon. The next step is to compute the following ratio. We're going to call this alpha. It is this g function...
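The steps described so far can be sketched in code. This is a minimal illustration, not the lecture's own implementation: it assumes a normal random-walk proposal for q (an assumption, since the lecture has not yet specified q), which is symmetric, so the q terms cancel and the acceptance ratio alpha reduces to g(theta-star) / g(theta). The function names `metropolis_hastings` and `g` are just labels chosen here.

```python
import numpy as np

def metropolis_hastings(g, m=10_000, theta0=0.0, proposal_sd=1.0, seed=0):
    """Draw m samples from a target known only up to proportionality via g.

    Assumes a symmetric normal random-walk proposal
    q(theta_star | theta) = Normal(theta, proposal_sd^2),
    so the q terms cancel in the acceptance ratio.
    """
    rng = np.random.default_rng(seed)
    samples = np.empty(m)
    theta = theta0                                  # step 1: initial value theta-naught
    for i in range(m):                              # step 2: repeat m times
        theta_star = rng.normal(theta, proposal_sd) # draw candidate from proposal q
        alpha = g(theta_star) / g(theta)            # acceptance ratio (q cancels here)
        if rng.uniform() < alpha:                   # accept with probability min(1, alpha)
            theta = theta_star
        samples[i] = theta                          # on rejection, keep the current value
    return samples

# Toy target: proportional to a standard normal density, normalizer unknown
g = lambda t: np.exp(-0.5 * t**2)
draws = metropolis_hastings(g, m=20_000)
print(draws.mean(), draws.std())  # should be near 0 and 1 after the chain mixes
```

Note that candidates with alpha greater than 1 are always accepted, since a uniform draw on [0, 1) is always below alpha in that case.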
