Quantifying dependence of Cauchy random variables












Suppose we are given two Cauchy random variables $\theta_1 \sim \mathrm{Cauchy}(x_0^{(1)}, \gamma^{(1)})$ and $\theta_2 \sim \mathrm{Cauchy}(x_0^{(2)}, \gamma^{(2)})$ that are not independent. The dependence structure of random variables can often be quantified with their covariance or correlation coefficient. However, these Cauchy random variables have no moments, so covariance and correlation do not exist.



Are there other ways of representing the dependence of the random variables? Is it possible to estimate those with Monte Carlo?
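
For concreteness, here is a minimal Monte Carlo sketch of one candidate: rank-based measures such as Kendall's $\tau$ or Spearman's $\rho$ exist without any moments. The way the dependent pair is generated below (correlated normals divided by a common $|N(0,1)|$, i.e. a bivariate Cauchy) is just one illustrative choice, not part of the question itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100_000
rho = 0.8                                   # correlation of the underlying normals (illustrative)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
w = np.abs(rng.standard_normal(n))          # common denominator -> bivariate Cauchy (t with 1 d.o.f.)
theta = z / w[:, None]                      # both margins are standard Cauchy, jointly dependent

tau, _ = stats.kendalltau(theta[:, 0], theta[:, 1])    # rank-based, so no moments are needed
rho_s, _ = stats.spearmanr(theta[:, 0], theta[:, 1])
print(tau, rho_s)
```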










covariance independence copula heavy-tailed






asked Jan 7 at 17:49 – Jonas

  • May consider general dependence metrics such as mutual information: en.wikipedia.org/wiki/Mutual_information – John Madden, Jan 7 at 18:03














2 Answers

Just because they don't have a covariance doesn't mean that the basic $x^T\Sigma^{-1} x$ structure usually associated with covariances can't be used. In fact, the multivariate ($k$-dimensional) Cauchy can be written as:



$$f(\mathbf{x}; \boldsymbol{\mu},\boldsymbol{\Sigma}, k)= \frac{\Gamma\left(\frac{1+k}{2}\right)}{\Gamma\left(\frac{1}{2}\right)\pi^{\frac{k}{2}}\left|\boldsymbol{\Sigma}\right|^{\frac{1}{2}}\left[1+(\mathbf{x}-\boldsymbol{\mu})^T\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})\right]^{\frac{1+k}{2}}}$$



which I have lifted from the Wikipedia page. This is just a multivariate Student-$t$ distribution with one degree of freedom.



For the purposes of developing intuition, I would just use the normalized off-diagonal elements of $\Sigma$ as if they were correlations, even though they are not. They reflect the strength of the linear relationship between the variables in a way very similar to that of a correlation; $\Sigma$ has to be positive definite symmetric; if $\Sigma$ is diagonal, the variates are independent, etc.
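
Written out explicitly for the bivariate case, the normalization above amounts to
$$\rho_{12} = \frac{\Sigma_{12}}{\sqrt{\Sigma_{11}\,\Sigma_{22}}} \in (-1,1),$$
which is well defined since $\Sigma$ is positive definite (the notation $\rho_{12}$ is only for this sketch).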



Maximum likelihood estimation of the parameters can be done using the E-M algorithm, which in this case is easily implemented. The log of the likelihood function is:



$$\mathcal{L}(\mu, \Sigma) = -\frac{n}{2}\log\left|\Sigma\right| - \frac{k+1}{2}\sum_{i=1}^n\log(1+s_i)$$



where $s_i = (x_i-\mu)^T\Sigma^{-1}(x_i-\mu)$. Differentiating leads to the following simple expressions:



$$\mu = \sum w_i x_i \Big/ \sum w_i$$



$$\Sigma = \frac{1}{n}\sum w_i(x_i-\mu)(x_i-\mu)^T$$



$$w_i = (1+k)/(1+s_i)$$



The E-M algorithm just iterates over these three expressions, substituting the most recent estimates of all the parameters at each step.



For more on this, see Estimation Methods for the Multivariate t Distribution, Nadarajah and Kotz, 2008.
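
A minimal NumPy sketch of these E-M updates (the starting values and iteration count below are my own choices, not taken from the reference):

```python
import numpy as np

def fit_mv_cauchy(x, n_iter=200):
    """E-M for the multivariate Cauchy: iterate the mu, Sigma, w_i updates above."""
    n, k = x.shape
    mu = np.median(x, axis=0)                               # robust start for the location
    iqr = np.subtract(*np.percentile(x, [75, 25], axis=0))  # per-coordinate interquartile range
    sigma = np.diag((iqr / 2.0) ** 2)                       # robust diagonal start for the scale
    for _ in range(n_iter):
        diff = x - mu
        s = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(sigma), diff)   # s_i
        w = (1.0 + k) / (1.0 + s)                                        # w_i
        mu = (w[:, None] * x).sum(axis=0) / w.sum()                      # weighted mean
        diff = x - mu
        sigma = np.einsum('i,ij,ik->jk', w, diff, diff) / n              # weighted scatter
    return mu, sigma

# e.g. pseudo-correlation from the fitted scale matrix:
# mu_hat, sigma_hat = fit_mv_cauchy(samples)
# rho = sigma_hat[0, 1] / np.sqrt(sigma_hat[0, 0] * sigma_hat[1, 1])
```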






answered Jan 7 at 18:22, edited Jan 7 at 22:35 – jbowman

  • That is a very good plan and a very detailed answer. One more question may be: Is it possible to write any joint Cauchy distribution like you did? For Gaussians, a similar answer is yes. But also for Gaussians correlation and dependence are equivalent. Is that also the case for Cauchy? – Jonas, Jan 9 at 7:04










  • Yes, this is the standard way of writing a multivariate Cauchy density. For the MV Cauchy, pseudo-correlation and dependence are also equivalent; all your intuitions carry over. $\sigma_{ij} = \sigma_i\sigma_j$ implies $x_i$ always $= x_j$, etc. – jbowman, Jan 9 at 13:56




















While $\text{cov}(X,Y)$ does not exist, for a pair of variates with Cauchy marginals, $\text{cov}(\Phi(X),\Phi(Y))$ does exist for, e.g., bounded functions $\Phi(\cdot)$. Actually, the notion of a covariance matrix is not well-suited to describing joint distributions in every setting, as it is not invariant under transformations.



Borrowing from the concept of copulas (which may also help in defining a joint distribution¹ for $(X,Y)$), one can turn $X$ and $Y$ into Uniform$(0,1)$ variates, by using their marginal cdfs, $\Phi_X(X)\sim\mathcal{U}(0,1)$ and $\Phi_Y(Y)\sim\mathcal{U}(0,1)$, and look at the covariance or correlation of the resulting variates.





¹For instance, when $X$ and $Y$ are both standard Cauchys, $$Z_X=\Phi^{-1}\big(\{2\arctan(X)/\pi+1\}/2\big)$$ is distributed as a standard Normal, and the joint distribution of $(Z_X,Z_Y)$ can be chosen to be a joint Normal
$$(Z_X,Z_Y) \sim \mathcal{N}_2(0_2,\Sigma).$$
This is a Gaussian copula.
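
A short simulation sketch of this Gaussian-copula construction (the copula correlation of 0.7 and the sample size are arbitrary illustrative choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.7], [0.7, 1.0]])                     # Sigma of the Gaussian copula
z = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)   # (Z_X, Z_Y)
u = stats.norm.cdf(z)                                        # uniform variates Phi_X(X), Phi_Y(Y)
x = np.tan(np.pi * (u - 0.5))                                # Cauchy quantile transform of the uniforms

corr_u = np.corrcoef(u[:, 0], u[:, 1])[0, 1]   # correlation of the uniform variates
rho_s, _ = stats.spearmanr(x[:, 0], x[:, 1])   # rank-based version of the same idea, from (X, Y) directly
print(corr_u, rho_s)
```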






answered Jan 7 at 18:12, edited Jan 7 at 21:04 – Xi'an

  • Thank you for your answer. I am not entirely sure, though, whether this is the right way to go. Values sampled from the Cauchy distribution will potentially be very large. When transforming them like this to a Gaussian, we probably end up putting all values in a very small set at the tail of the Gaussian. In that case, we can still estimate a covariance, but I guess the correlation would be close to 1. – Jonas, Jan 9 at 7:20










  • My point is that the correlation is a linear measure of dependence that depends on the parameterisation of the distribution. And once the two Cauchy variates are turned into Gaussians, their correlation can be anything between -1 and 1. Check the copula keyword on Wikipedia. – Xi'an, Jan 9 at 7:29










