Can a summation be transferred into the denominator?
I include a bit of an introduction, even though my main question is more mathematical. I was tasked with finding the maximum likelihood estimate for $\theta$ in $$\mathrm P(X>x) = \left(\frac ax\right)^\theta$$ where $X$ is a random variable and $x$ represents a value that variable can take on.
The probability density function is $\frac{\mathrm dF}{\mathrm dx}=\frac{-\theta a^\theta}{x^{\theta + 1}}$, where $F = \mathrm P(X>x)$. I maximise the log-likelihood function $l = \ln(-\theta) + \theta \ln a - (\theta + 1)\ln x$ to get $\hat\theta(x_i) = \frac 1{\ln x_i - \ln a}$, where the hat indicates that $\hat\theta$ is an estimate of $\theta$ based on the data sample. Now, the answer is supposed to be $$\hat\theta = \frac 1{\overline{\ln x} - \ln a}$$ where $\overline{\phantom{x}}$ indicates the average: $\overline{\ln x} = \frac 1n \sum_i \ln x_i$. I am stumped as to how to get this answer directly from $\hat\theta(x_i)$.
Does $$\frac 1n \sum_i \frac 1{\ln x_i - \ln a} = \frac 1{\overline{\ln x} - \ln a}\qquad?$$
I think $\frac 1n \sum_i \widehat{\frac 1{\theta(x_i)}} = \frac 1n\sum_i (\ln x_i - \ln a) = \overline{\ln x} - \ln a = \widehat{\frac 1\theta} \implies \hat\theta = \frac 1{\overline{\ln x} - \ln a}$, but is this the only way to show the above?
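A quick numerical check (an editorial sketch, not part of the original post; the sample values are arbitrary) suggests the displayed identity cannot hold in general: writing $y_i = \ln x_i - \ln a > 0$, the left-hand side is the average of the reciprocals $1/y_i$, while the right-hand side is the reciprocal of the average $\bar y$, and by the AM–HM inequality these coincide only when all the $x_i$ are equal.

```python
import numpy as np

a = 1.0
x = np.array([2.0, 4.0, 8.0])   # arbitrary sample values, all > a
y = np.log(x) - np.log(a)       # y_i = ln(x_i) - ln(a)

lhs = np.mean(1.0 / y)          # (1/n) * sum_i 1/(ln x_i - ln a)
rhs = 1.0 / np.mean(y)          # 1 / (mean(ln x) - ln a)
print(lhs, rhs)                 # ~0.8816 vs ~0.7213 -- not equal
```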
Tags: statistics, summation, means, maximum-likelihood
asked Jan 29 at 15:23 by ahorn
– StubbornAtom (Jan 29 at 16:24): Several errors in the post. The probability density function is the derivative of the distribution function $P(X\le x)$. And please mention the support/domain of the density, where it is defined; this is important for deriving the maximum likelihood estimator.
– StubbornAtom (Jan 29 at 16:45): Looks like $a$ is known here; this should be mentioned.
– ahorn (Jan 30 at 6:31): @StubbornAtom yes, I can see now that my PDF was calculated incorrectly. I think I included the introduction to check if I got any of it wrong. $a$ is "known". See the original document here (only a few cut pages).
– callculus (Jan 30 at 20:53): @ahorn Is there still a question that has to be clarified?
1 Answer
Usually the maximum likelihood estimator is calculated based on the pdf of $X$.
If $P(X>x)=\left(\frac{a}{x}\right)^{\theta}$, then $P(X\le x)=1-\left(\frac{a}{x}\right)^{\theta}$.
Differentiating with respect to $x$ gives the density $$f_X(x)=\frac{\theta a^{\theta}}{x^{\theta+1}},\qquad x\ge a.$$
Then the likelihood function is $$L(\theta)=\prod_{i=1}^n \frac{\theta a^{\theta}}{x_i^{\theta + 1}}=\theta^n a^{n\theta}\prod_{i=1}^n x_i^{-1-\theta}.$$
Taking logarithms, $$\ln L=n\ln\theta+n\theta\ln a+\sum_{i=1}^{n}(-1-\theta)\ln x_i.$$
Now differentiate $\ln L$, set the derivative equal to $0$, and solve for $\theta$.
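Spelling out that step (it follows directly from the expression for $\ln L$ above): $$\frac{\mathrm d\ln L}{\mathrm d\theta}=\frac{n}{\theta}+n\ln a-\sum_{i=1}^{n}\ln x_i=0\quad\Longrightarrow\quad\hat\theta=\frac{n}{\sum_{i=1}^{n}\ln x_i-n\ln a}.$$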
The result is indeed $\hat\theta = \frac 1{\overline{\ln x} - \ln a}$, where $\overline{\ln x}=\frac{1}{n}\sum_{i=1}^{n} \ln x_i$.
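As a sanity check on this closed form, here is a minimal Python sketch (an editorial addition, not part of the original answer; the seed, sample size, and grid bounds are arbitrary choices, and NumPy is assumed available). It simulates Pareto data with known $a$ and $\theta$, evaluates the closed-form estimator, and compares it with a brute-force maximization of $\ln L$ over a grid.

```python
import numpy as np

rng = np.random.default_rng(0)
a, theta, n = 2.0, 3.0, 10_000

# Draw a Pareto(scale=a, shape=theta) sample: numpy's pareto() is the
# Lomax (shifted) form, so add 1 and rescale by a.
x = a * (rng.pareto(theta, size=n) + 1.0)

# Closed-form MLE from the derivation above.
theta_hat = 1.0 / (np.log(x).mean() - np.log(a))

# Brute-force check: maximize ln L(theta) over a grid.
grid = np.linspace(0.1, 10.0, 100_000)
loglik = n * np.log(grid) + n * grid * np.log(a) - (1.0 + grid) * np.log(x).sum()
theta_grid = grid[np.argmax(loglik)]

print(theta_hat, theta_grid)  # both should land near the true theta = 3
```

The grid search is deliberately crude; it only serves to confirm that the stationary point found above is indeed the maximum.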
answered Jan 29 at 16:37 by callculus (edited Jan 29 at 16:44)
– ahorn (Jan 31 at 3:57): I'm just stuck on the part where you state what the likelihood function is. I haven't dealt with likelihood functions much (and I just assumed it would be the PDF), so perhaps I could find out more on Wikipedia?
– callculus (Jan 31 at 6:07): @ahorn Sure, you can search the web or just here at MSE; the key phrase is "maximum likelihood estimation (MLE)". In my opinion this site is more comprehensible than Wiki. Basically you have an observation of $n$ data points; they are fixed. Then you multiply the probability density function (pdf) evaluated at each observed data point $x_i$, $n$ factors in all. This is the likelihood function. In your case the parameter $a$ is given as well. Now you are looking for the maximum of the likelihood function,
– callculus (Jan 31 at 6:11): @ahorn (Continued): In many cases this can be done by differentiating the likelihood function, or equivalently the logarithm of the function. But the MLE cannot always be found by differentiation; an example is the uniform distribution.
– ahorn (Jan 31 at 8:29): I think I can think of the likelihood function as evaluating the $n$-dimensional PDF with domain $\prod_{i=1}^n X_i$ at the point $\mathbf x = (x_1, x_2, \dots, x_n)$, then maximizing that value w.r.t. $\theta$.
– callculus (Jan 31 at 9:41): @ahorn It sounds like a useful interpretation.
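For completeness (an editorial addition), that interpretation matches the usual definition for an i.i.d. sample: the joint density factorizes, so $$L(\theta)=f_{X_1,\dots,X_n}(x_1,\dots,x_n;\theta)=\prod_{i=1}^n f_X(x_i;\theta),$$ regarded as a function of $\theta$ with the observed data held fixed.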