The motivation for estimating the mutual information of continuous random variables from $k$-th nearest neighbour distances
I am reading the paper Kraskov et al., 2004, Estimating Mutual Information, on estimating the mutual information of two continuous random variables from entropy estimates based on $k$-nearest-neighbour distances. I do not quite understand the motivation for using $k$-nearest-neighbour distances. Could someone elucidate it?
This paper is also referred to in this answer.
information-theory entropy
asked Jan 11 at 11:35, edited Jan 11 at 18:11
Hans

Is the question that you want the intuition behind it?
– user3658307
Jan 12 at 2:38

@user3658307: Yes, the intuitive motivation.
– Hans
Jan 12 at 2:57
1 Answer
Here's the intuition from my perspective.
First, at a high level, the mutual information (MI) measures the (expected, logarithmic) discrepancy between the joint distribution $p(x,y)$ and the product of the marginals $p(x)p(y)$; it is the Kullback-Leibler divergence $I(X;Y)=\iint p(x,y)\log\frac{p(x,y)}{p(x)p(y)}\,\mathrm{d}x\,\mathrm{d}y$. Estimating it therefore requires some way of estimating densities. The obvious way is to build a histogram and count the (normalized) number of points falling into each bin.
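For concreteness, here is a minimal sketch of that histogram ("plug-in") approach in Python/NumPy. It is only an illustration, not the paper's method; the function name, the bin count, and the synthetic Gaussian data are my own choices.

```python
import numpy as np

def mi_histogram(x, y, bins=20):
    """Plug-in mutual information estimate (in nats) from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                       # joint probability table
    px = pxy.sum(axis=1, keepdims=True)    # marginal of x (column vector)
    py = pxy.sum(axis=0, keepdims=True)    # marginal of y (row vector)
    mask = pxy > 0                         # skip empty bins to avoid log(0)
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

# Correlated Gaussians: the estimate should come out clearly positive.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 0.8 * x + 0.6 * rng.normal(size=5000)
print(mi_histogram(x, y))
```

The main weakness is the bin size itself: too coarse and you smooth away the dependence, too fine and most bins are empty, which is part of what motivates a distance-based alternative.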
However, keep in mind that we are ultimately interested in the probability density, which is directly reflected in the density of the samples. Intuitively, samples are densely packed in a region exactly when the distances to their nearest neighbours are small; conversely, in regions of low density there are few samples, so the distance from a point to its closest neighbours is large.
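To make that "distance as inverse density" idea concrete: if the ball around $x$ that just reaches its $k$-th nearest sample has radius $r_k(x)$, then roughly $k/n \approx p(x)\,V_d\,r_k(x)^d$, so $\hat p(x) = k/\big(n\,V_d\,r_k(x)^d\big)$, where $V_d$ is the volume of the $d$-dimensional unit ball. A rough Python sketch of this (my own illustration, not code from the paper):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

def knn_density(samples, queries, k=5):
    """k-NN density estimate: p_hat(x) = k / (n * volume of the ball whose
    radius is the distance from x to its k-th nearest sample point)."""
    samples = np.asarray(samples, dtype=float)
    queries = np.asarray(queries, dtype=float)
    if samples.ndim == 1:                               # allow 1-D data
        samples, queries = samples[:, None], queries[:, None]
    n, d = samples.shape
    dists = cKDTree(samples).query(queries, k=k)[0]
    r_k = dists if dists.ndim == 1 else dists[:, -1]    # k-th neighbour distance
    unit_ball_volume = np.pi ** (d / 2) / gamma(d / 2 + 1)
    return k / (n * unit_ball_volume * r_k ** d)
```

(If the query points are the sample points themselves, ask for $k+1$ neighbours and drop the zero self-distance.) The key feature is that $r_k(x)$ shrinks where samples are crowded and grows where they are sparse, which is the same density information a bin count would give, but at a resolution that adapts to the data.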
Hence, the paper tries to
estimate MI from k-nearest neighbour statistics
So basically we can just think of using neighbour distances as another proxy for sample density, instead of counting samples in bins. It sort of reminds me of kernel density estimation.
More concretely, however, the paper works from a different decomposition of the mutual information, namely $I(X;Y) = H(X) + H(Y) - H(X,Y)$: the sum of the marginal entropies minus the joint entropy. They then plug in estimators for these entropies based on the nearest-neighbour statistics.
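As a rough sketch of how that decomposition can be turned into an estimator, here is a Kozachenko-Leonenko-style $k$-NN entropy estimate applied three times (to $X$, $Y$, and the joint $(X,Y)$). This is only the naive version, with the usual Euclidean-norm constants; treat it as an illustration rather than the paper's estimator.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gamma

def kl_entropy(samples, k=5):
    """Kozachenko-Leonenko k-NN entropy estimate (in nats).
    Assumes an (n, d) array (or 1-D array) of distinct sample points."""
    samples = np.asarray(samples, dtype=float)
    if samples.ndim == 1:
        samples = samples[:, None]
    n, d = samples.shape
    # Query k+1 neighbours because the nearest "neighbour" of a point is itself.
    r_k = cKDTree(samples).query(samples, k=k + 1)[0][:, -1]
    log_unit_ball = (d / 2) * np.log(np.pi) - np.log(gamma(d / 2 + 1))
    return digamma(n) - digamma(k) + log_unit_ball + d * np.mean(np.log(r_k))

def mi_knn_naive(x, y, k=5):
    """Naive MI estimate: H(X) + H(Y) - H(X, Y), each term estimated
    separately with the k-NN entropy estimator above."""
    return kl_entropy(x, k) + kl_entropy(y, k) - kl_entropy(np.column_stack([x, y]), k)
```

The estimator actually proposed in the paper (the KSG estimator) does not estimate the three entropies independently: it fixes the $k$-th-neighbour distance in the joint space and then counts how many points fall within that distance in each marginal, so that the errors of the three terms largely cancel. The naive sum above is the baseline that refinement is designed to beat.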
Another intuition, I think, is that counting samples based on local neighbourhood distances (in the joint space) makes the algorithm adaptive (as opposed to choosing a single bin size over the whole sample space).
answered Jan 12 at 4:28
user3658307
What is "it" referring to in "It sort of reminds me of kernel density estimation", the $k$-nearest-neighbour method or counting samples in bins? To me, kernel density estimation is akin to a histogram, only one whose bins overlap and are "soft". What puzzles me then is which $k$ one should choose. Should we sum or average over all $k$'s, or leave it as a hyperparameter to be determined by, say, cross-validation? Also, which is better, $k$-nearest neighbour or kernel density estimation?
– Hans
Jan 12 at 7:40

By the way, you may be interested in and have an opinion on my stats.stackexchange.com question stats.stackexchange.com/q/386101/44368.
– Hans
Jan 12 at 7:45

@Hans $k$ is chosen as a fixed value, I believe. The tradeoff between higher and lower $k$ is discussed in the paper (e.g., computational cost, relation to the scale of the data), and you have to choose $k$ based on those considerations. KDE is indeed like a soft histogram. "It" was just referring to the nearest-neighbour method. KDE also has several (difficult) parameters to choose: which kernel? which bandwidth? I am not an expert in this area, but I suspect that both methods can be accurate in practice, depending on the chosen parameters and the data :)
– user3658307
Jan 12 at 14:52