Markov Chain Monte Carlo (Metropolis-Hastings): estimation of parameters
I have 6 parameters to estimate, $p = (\theta, \nu)$ with $\theta = [a, b]$ and $\nu = [r_0, c_0, \alpha, \beta]$, using Bayesian and MCMC methods. The point spread function (PSF) is
$$\text{PSF}(r,c) = \bigg(1 + \dfrac{r^2 + c^2}{\alpha^2}\bigg)^{-\beta}$$
and the data model is
$$d(r,c) = a \cdot \text{PSF}_{\alpha,\beta}(r - r_0,\, c - c_0) + b + \epsilon(r,c)$$
with $\epsilon$ white Gaussian noise.
In matrix form:
$$\begin{bmatrix} d(1,1) \\ d(1,2) \\ d(1,3) \\ \vdots \\ d(20,20) \end{bmatrix}
= \begin{bmatrix} \text{PSF}_{\alpha,\beta}(1-r_0,1-c_0) & 1 \\ \text{PSF}_{\alpha,\beta}(1-r_0,2-c_0) & 1 \\ \text{PSF}_{\alpha,\beta}(1-r_0,3-c_0) & 1 \\ \vdots & \vdots \\ \text{PSF}_{\alpha,\beta}(20-r_0,20-c_0) & 1 \end{bmatrix} \times \begin{bmatrix} a \\ b \end{bmatrix}
+ \begin{bmatrix} \epsilon(1,1) \\ \epsilon(1,2) \\ \epsilon(1,3) \\ \vdots \\ \epsilon(20,20) \end{bmatrix}$$
So, for the vectorized data $d$, we can write
$$d = H(\nu)\,\theta + \epsilon$$
with $H(\nu)$ the matrix defined above.
I know the relation for the posterior density (with $d$ the data and $p$ the vector of parameters):
$$f(p \mid d) = \dfrac{f(p)\, f(d \mid p)}{\int f(d \mid p)\, f(p)\, \text{d}p}$$
We can also write
$$f(p \mid d) \propto f(d \mid p)\, f(p) \quad (1)$$
with $f(d \mid p)$ the likelihood and $f(p)$ the prior (which I can take to be a uniform distribution).
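Since $\epsilon$ is white Gaussian noise, the likelihood is explicit; assuming a known noise variance $\sigma^2$ (my notation, not fixed above),
$$f(d \mid p) \propto \exp\!\bigg(-\dfrac{\lVert d - H(\nu)\,\theta \rVert^2}{2\sigma^2}\bigg),$$
so, up to an additive constant, the negative log-likelihood is the quadratic cost $\lVert d - H(\nu)\,\theta \rVert^2 / (2\sigma^2)$ computed by the function below.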
Now, how can I estimate the parameters $p$ from relation $(1)$ with MCMC methods, specifically with the Metropolis algorithm in my case?
With the maximum-likelihood approach, I previously used the following cost function to estimate these 6 parameters:
function cost = Crit_J(p, D)
% Crit_J  Sum-of-squares cost for p = [a, b, r0, c0, alpha, beta]
% given the observed image D (R x C).
[R, C] = size(D);
[Cols, Rows] = meshgrid(1:C, 1:R);
% PSF evaluated on the pixel grid shifted by (r0, c0)
Model = (1 + ((Rows - p(3)).^2 + (Cols - p(4)).^2) / p(5)^2).^(-p(6));
% Build H = [PSF(:), 1] so that the noise-free model is H * [a; b]
H = [Model(:), ones(R*C, 1)];
% Squared norm of the residual d - H*theta (nonnegative by construction)
res  = D(:) - H * [p(1); p(2)];
cost = res' * res;
end
I then minimize this cost with MATLAB's fminsearch to find a (local) minimum.
At first sight, I thought I just had to evaluate the distribution at randomly generated values of $p$, but it seems to be more subtle than that.
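For concreteness, here is a minimal random-walk Metropolis sketch that reuses Crit_J as a negative log-likelihood: a proposed point $q$ is accepted with probability $\min\{1,\, f(q \mid d)/f(p \mid d)\}$, which under a flat prior reduces to a likelihood ratio. The noise variance sigma2, the step sizes, and the chain length are illustrative assumptions:

% Random-walk Metropolis sketch (illustrative settings, not tuned).
% Assumes: D = observed image, sigma2 = known noise variance.
N    = 1e5;                          % chain length (assumption)
step = [0.1 0.1 0.2 0.2 0.1 0.05];   % proposal std devs (assumption)
p    = [1, 0, 10, 10, 2, 1];         % arbitrary starting point
lp   = -Crit_J(p, D) / (2*sigma2);   % log-posterior up to a constant (flat prior)
chain = zeros(N, 6);
for k = 1:N
    q   = p + step .* randn(1, 6);   % symmetric Gaussian proposal
    lpq = -Crit_J(q, D) / (2*sigma2);
    if log(rand) < lpq - lp          % Metropolis acceptance test
        p = q;  lp = lpq;
    end
    chain(k, :) = p;
end
p_hat = mean(chain(N/2+1:end, :));   % posterior mean after discarding burn-in

A bounded uniform prior can be enforced by simply rejecting any proposal that falls outside its support (e.g. requiring $\alpha, \beta > 0$).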
Question 1) How can we prove that, starting from arbitrary initial values of the parameters $p$, the Metropolis algorithm converges to the right estimates?
Question 2) How would one implement it for this concrete example?
Any help is welcome. Regards.
maximum-likelihood monte-carlo log-likelihood
asked 20 hours ago by youpilat13