Given the cumulative distribution function, find a random variable that has this distribution














We are given a cumulative distribution function (CDF); how do we find a random variable that has the given distribution? Since there could be many such variables, it is enough to find any one, given that the function is differentiable and increasing.



I read somewhere that we could simply take the inverse of the function (if it exists), and that would be the random variable we are looking for; but I don't understand why this would work, even if it's true.



If that's true, why? Otherwise, how do we find a random variable with the given distribution?






















  • I see that the answers discuss the probability integral transform (en.wikipedia.org/wiki/Probability_integral_transform), which answers/clarifies the second paragraph of the question: generating a rv $X$ that has the given CDF from a standard uniform rv. But I don't know what the question in the title ("given the cumulative distribution function find a random variable that has this distribution") means, since if a CDF $F(x)$ is given for some random variable $X$, then for any $a$ and $b$ the probability $P(a < X \leq b)$ is nailed down by $F(b)-F(a)$, so $F(x)$ does indeed specify the rv a.e.
    – Just_to_Answer, Aug 8 '17 at 16:16












  • @Just_to_Answer Obviously $F(x)$ is a random variable, but its distribution isn't the same as $F(x)$.
    – aroma, Aug 8 '17 at 17:31










  • $F(x)$ by convention is not a random variable. It is a real-valued function $\mathbb{R} \rightarrow [0,1]$ defined by $F(x) = P(X \leq x)$. So my point is that specifying $F(x)$ is a.e. the same as specifying $X$.
    – Just_to_Answer, Aug 8 '17 at 17:41










  • @Just_to_Answer I don't understand your point. Even if $F(x)$ is a r.v., how does it help in finding a r.v. whose CDF is $F(x)$? The CDF of the r.v. $F(x)$ is not necessarily $F(x)$.
    – aroma, Aug 8 '17 at 17:45












  • Let me try it differently. You give me a valid CDF $F(x)$ and say this is the CDF of some rv $X$. Using just that $F(x)$ and nothing else, I can compute any probabilities, moments, etc. involving the rv $X$.
    – Just_to_Answer, Aug 8 '17 at 17:52

















probability statistics probability-distributions random-variables






asked Aug 8 '17 at 12:38 by aroma












4 Answers


















Answer by leonbloy (Aug 8 '17 at 12:58):

Let $X=g^{-1}(Y)$, where $Y$ is uniform on $[0,1]$ and $g(\cdot)$ is strictly increasing (and hence invertible).

Then $$F_X(x)=P(X\le x)=P(g^{-1}(Y)\le x)=P(Y\le g(x))=g(x).$$
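As a concrete numeric sketch of this answer (my own illustration, not part of the original post; the target CDF $g(x)=x^2$ on $[0,1]$ is a made-up example, chosen because it is strictly increasing there with inverse $\sqrt{\cdot}$):

```python
import math
import random

# Hypothetical example: the target CDF is g(x) = x**2 on [0, 1],
# strictly increasing there, so X = g^{-1}(Y) = sqrt(Y).
def sample_via_inverse(g_inv, n, seed=0):
    rng = random.Random(seed)
    return [g_inv(rng.random()) for _ in range(n)]

samples = sample_via_inverse(math.sqrt, 100_000)

def empirical_cdf(xs, x):
    # Fraction of samples at or below x: an estimate of F_X(x).
    return sum(s <= x for s in xs) / len(xs)

# The empirical CDF at 0.5 should be close to g(0.5) = 0.25.
print(empirical_cdf(samples, 0.5))
```

The check at the end is exactly the identity in the answer: $F_X(x)=g(x)$, estimated from samples.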



















  • This looks mathematically fine, but is there a way to understand this intuitively?
    – aroma, Aug 8 '17 at 13:01










  • Perhaps it's a little easier to think of it the other way: math.stackexchange.com/questions/868400/…
    – leonbloy, Aug 8 '17 at 13:06










  • One intuitive way to think about the concept above (inverse transform sampling) is this: visualize the graph of your CDF, which has range $(0,1)$. Imagine picking a point randomly in $(0,1)$ on the vertical axis, going horizontally until you hit the CDF, then dropping to the horizontal axis and taking where you landed as your rv's realization. Do this over and over, and think about the long-run distribution of the realizations you get.
    – Just_to_Answer, Aug 8 '17 at 21:52



















Answer by drhab (Aug 8 '17 at 13:05):

Let $U$ be a random variable with uniform distribution on $(0,1)$.

Prescribe $\Phi:(0,1)\to\mathbb{R}$ by $$u\mapsto\inf\{x\in\mathbb{R}\mid u\leq F(x)\}.$$

Then it can be shown that $$\Phi(u)\leq x\iff u\leq F(x).$$

So we have $$\Phi(U)\leq x\iff U\leq F(x),$$ and consequently $$F_{\Phi(U)}(x)=P(\Phi(U)\leq x)=P(U\leq F(x))=F(x).$$

This always works.

In the special case where $F$ has an inverse, $\Phi$ is exactly that inverse.
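To illustrate why this construction works even when $F$ has no inverse, here is a sketch (my own, with a made-up CDF $F$ that has a jump of height $1/2$ at $x=0$); $\Phi$ is computed numerically by bisection, which is valid because $F$ is nondecreasing:

```python
import random

# Hypothetical non-invertible CDF with a jump of height 1/2 at x = 0:
# F(x) = 0 for x < 0, (1 + x)/2 for 0 <= x < 1, 1 for x >= 1.
def F(x):
    if x < 0:
        return 0.0
    if x < 1:
        return (1 + x) / 2
    return 1.0

def phi(u, lo=-1.0, hi=2.0, iters=60):
    # Generalized inverse Phi(u) = inf{x : u <= F(x)}, by bisection;
    # the invariant F(hi) >= u is maintained, so hi converges to the inf.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if u <= F(mid):
            hi = mid
        else:
            lo = mid
    return hi

rng = random.Random(1)
samples = [phi(rng.random()) for _ in range(50_000)]

# The jump at 0 becomes an atom: about half the samples land at 0.
frac_at_zero = sum(abs(s) < 1e-9 for s in samples) / len(samples)
print(frac_at_zero)
```

The jump in $F$ turns into a point mass of $\Phi(U)$, which an ordinary inverse could not produce.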



























Answer by Ian:

If you are studying elementary probability theory, allow me to reformulate your question as "how can I represent a random variable $X$ with a given CDF $F_X$ in terms of a uniform random variable $U$ on $(0,1)$?" The answer to that is the quantile function: you define

$$G_X(p)=\inf\{x : F_X(x)\geq p\}$$
and then define $X$ to be $G_X(U)$.

Note that if $F_X$ is invertible then $G_X=F_X^{-1}$; otherwise this is "the right generalization". One can see this by looking at the discrete case: if $P(X=x)=p$ then $P(G_X(U)=x)=p$. This is because a jump of height $p$ in $F_X$ corresponds to a flat region of length $p$ in $G_X$, and the uniform distribution on $(0,1)$ assigns each interval a probability equal to its length.

The natural question is now "what's a uniform random variable on $(0,1)$?" Well, it has $$F_U(x)=\begin{cases} 0 & x<0 \\ x & x \in [0,1] \\ 1 & x>1 \end{cases}$$ but otherwise such a thing is a black box from the elementary point of view.

If you are studying measure-theoretic probability theory, then the answer is a bit more explicit. A random variable with CDF $F_X$ is given by $G_X : \Omega \to \mathbb{R}$, where $G_X$ is the quantile function as defined before, $\Omega=(0,1)$, $\mathcal{F}$ is the Borel $\sigma$-algebra on $(0,1)$, and $\mathbb{P}$ is the Lebesgue measure. Note that on this space the identity function is a uniform random variable on $(0,1)$, so this is really the same construction as the one described above.

In any case these constructions can be generalized to finitely many random variables by looking at the uniform distribution on $(0,1)^n$ instead of $(0,1)$.
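The discrete case described in this answer (a jump of height $p$ in $F_X$ becoming a flat of length $p$ in $G_X$) can be sketched directly; the three-point distribution below is my own hypothetical example:

```python
import bisect
import random

# Hypothetical discrete distribution: P(X=1)=0.2, P(X=2)=0.5, P(X=3)=0.3.
values = [1, 2, 3]
cum = [0.2, 0.7, 1.0]  # running totals of the probabilities, i.e. F_X

def G(p):
    # Quantile function G_X(p) = inf{x : F_X(x) >= p}: the first value
    # whose cumulative probability reaches p.
    return values[bisect.bisect_left(cum, p)]

rng = random.Random(2)
samples = [G(rng.random()) for _ in range(100_000)]

# The flat of length 0.5 in G_X maps back to the atom at x = 2,
# so about half the samples should equal 2.
freq2 = samples.count(2) / len(samples)
print(freq2)
```

Each interval of $u$-values of length $p_i$ is sent to $x_i$, which is precisely $P(G_X(U)=x_i)=p_i$.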















  • Would the downvoter care to comment? My guess would be that you are either criticizing the first paragraph or the long-windedness of the rest of it, but I am curious to hear your particular criticism, especially as it apparently does not pertain to any of the other answers.
    – Ian, Aug 8 '17 at 13:57





















Answer by Yves Daoust:

The $\text{cdf}$ of a random variable gives the probability that its value falls below a given bound,

$$\text{cdf}_X(x)=\mathbb{P}(X<x).$$

For a uniform variable $U$, we have

$$\text{cdf}_U(x)=\mathbb{P}(U<x)=x$$ for $x\in[0,1]$.

Now if you want to create a random variable with an imposed $\text{cdf}$, say $h(x)$, you want to achieve

$$\mathbb{P}(X<x)=h(x).$$

If $X$ derives from a uniform variable via some transformation, say $X=g(U)$, you have

$$\mathbb{P}(X<x)=\mathbb{P}(g(U)<x)=h(x).$$

But also, assuming $g$ invertible and increasing, and using the fact that $U$ is uniform,

$$\mathbb{P}(g(U)<x)=\mathbb{P}(U<g^{-1}(x))=g^{-1}(x).$$

Putting these facts together,

$$g^{-1}(x)=h(x)$$ or $$g(x)=h^{-1}(x).$$

For a simple example, take a negative exponential distribution,

$$\text{cdf}_X=h(x)=1-e^{-x}.$$

Then by inversion,

$$g(x)=-\ln(1-x),$$ which maps $0$ to $0$ and $1$ to $\infty$. By looking at the plot of $g$, you will see that the "density" of the transformed points decreases, because the slope increases, resulting in a bias in favor of small values.

(Figure: plot of $g(x)=-\ln(1-x)$ on $(0,1)$.)
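The exponential example above can be checked numerically; this is a small sketch of my own, not part of the original answer:

```python
import math
import random

# The inversion from the answer: h(x) = 1 - e^{-x} gives
# g(x) = -ln(1 - x), so g(U) is Exp(1) when U is uniform on (0, 1).
rng = random.Random(3)
samples = [-math.log(1 - rng.random()) for _ in range(100_000)]

mean = sum(samples) / len(samples)  # Exp(1) has mean 1
print(mean)
```

The sample mean should be close to $1$, the mean of the $\mathrm{Exp}(1)$ distribution, confirming the bias toward small values with a long right tail.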





We can even play the same exercise when $U$ is not uniform but has a known $\text{cdf}$, say $f(x)$. Then it suffices to adapt the equations:

$$\mathbb{P}(X<x)=\mathbb{P}(g(U)<x)=\mathbb{P}(U<g^{-1}(x))=f(g^{-1}(x))=h(x),$$ and we need

$$g(x)=h^{-1}(f(x)).$$
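A sketch of this non-uniform case (my own choice of $f$ and $h$, not from the original answer): take the source $U$ to be $\mathrm{Exp}(1)$ with $f(x)=1-e^{-x}$ and the target to be uniform with $h(x)=x$, so $g(x)=h^{-1}(f(x))=1-e^{-x}$. This is the probability integral transform mentioned in the comments: feeding a variable through its own CDF yields a uniform variable.

```python
import math
import random

# Source: Exp(1) samples, f(x) = 1 - e^{-x}.
# Target: uniform on (0,1), h(x) = x, so g(x) = h^{-1}(f(x)) = 1 - e^{-x}.
rng = random.Random(4)
exp_samples = [-math.log(1 - rng.random()) for _ in range(100_000)]
unif_samples = [1 - math.exp(-x) for x in exp_samples]

mean = sum(unif_samples) / len(unif_samples)  # uniform(0,1) has mean 1/2
print(mean)
```

The transformed samples should look uniform on $(0,1)$, e.g. with mean close to $1/2$.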















  • The right-continuous convention for CDFs is much more common these days than the left-continuous one.
    – Ian, Aug 8 '17 at 13:19












  • @Ian: yep, but this is irrelevant to the present discussion.
    – Yves Daoust, Aug 8 '17 at 13:30






  • Well... sort of. Of course everything turns out the same in either convention, but some definitions are altered in ways that might not be obvious. For example, the quantile function is defined through a sup rather than an inf.
    – Ian, Aug 8 '17 at 13:32













    4 Answers
    4






    active

    oldest

    votes








    4 Answers
    4






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    0












    $begingroup$

    Let $X=g^{-1}(Y)$ where $Y$ is uniform on $[0,1]$ and $g(cdot)$ is strictly increasing (and hence invertible).



    Then $$F_X(x)=P(Xle x)=P(g^{-1}(Y) le x) = P(Y le g(x))=g(x) $$






    share|cite|improve this answer









    $endgroup$













    • $begingroup$
      This looks mathematically fine, but is there way to intuitively understand this?
      $endgroup$
      – aroma
      Aug 8 '17 at 13:01










    • $begingroup$
      Perhaps it's a little easier to think it the other way math.stackexchange.com/questions/868400/…
      $endgroup$
      – leonbloy
      Aug 8 '17 at 13:06










    • $begingroup$
      One intuitive way to think about the concept above (inverse transform sampling) is this: Visualize the graph of your CDF which has the range $(0,1)$. Imagine picking a point randomly in the range $(0,1)$ on the vertical axis, then going horizontally until you hit the CDF, then dropping to the horizontal axis, and picking where you landed to be your rv's realization. Think about doing this over and over. In the long run, think about the distribution of the realizations you will get.
      $endgroup$
      – Just_to_Answer
      Aug 8 '17 at 21:52
















    0












    $begingroup$

    Let $X=g^{-1}(Y)$ where $Y$ is uniform on $[0,1]$ and $g(cdot)$ is strictly increasing (and hence invertible).



    Then $$F_X(x)=P(Xle x)=P(g^{-1}(Y) le x) = P(Y le g(x))=g(x) $$






    share|cite|improve this answer









    $endgroup$













    • $begingroup$
      This looks mathematically fine, but is there way to intuitively understand this?
      $endgroup$
      – aroma
      Aug 8 '17 at 13:01










    • $begingroup$
      Perhaps it's a little easier to think it the other way math.stackexchange.com/questions/868400/…
      $endgroup$
      – leonbloy
      Aug 8 '17 at 13:06










    • $begingroup$
      One intuitive way to think about the concept above (inverse transform sampling) is this: Visualize the graph of your CDF which has the range $(0,1)$. Imagine picking a point randomly in the range $(0,1)$ on the vertical axis, then going horizontally until you hit the CDF, then dropping to the horizontal axis, and picking where you landed to be your rv's realization. Think about doing this over and over. In the long run, think about the distribution of the realizations you will get.
      $endgroup$
      – Just_to_Answer
      Aug 8 '17 at 21:52














    0












    0








    0





    $begingroup$

    Let $X=g^{-1}(Y)$ where $Y$ is uniform on $[0,1]$ and $g(cdot)$ is strictly increasing (and hence invertible).



    Then $$F_X(x)=P(Xle x)=P(g^{-1}(Y) le x) = P(Y le g(x))=g(x) $$






    share|cite|improve this answer









    $endgroup$



    Let $X=g^{-1}(Y)$ where $Y$ is uniform on $[0,1]$ and $g(cdot)$ is strictly increasing (and hence invertible).



    Then $$F_X(x)=P(Xle x)=P(g^{-1}(Y) le x) = P(Y le g(x))=g(x) $$







    share|cite|improve this answer












    share|cite|improve this answer



    share|cite|improve this answer










    answered Aug 8 '17 at 12:58









    leonbloyleonbloy

    41.6k647108




    41.6k647108












    • $begingroup$
      This looks mathematically fine, but is there way to intuitively understand this?
      $endgroup$
      – aroma
      Aug 8 '17 at 13:01










    • $begingroup$
      Perhaps it's a little easier to think it the other way math.stackexchange.com/questions/868400/…
      $endgroup$
      – leonbloy
      Aug 8 '17 at 13:06










    • $begingroup$
      One intuitive way to think about the concept above (inverse transform sampling) is this: Visualize the graph of your CDF which has the range $(0,1)$. Imagine picking a point randomly in the range $(0,1)$ on the vertical axis, then going horizontally until you hit the CDF, then dropping to the horizontal axis, and picking where you landed to be your rv's realization. Think about doing this over and over. In the long run, think about the distribution of the realizations you will get.
      $endgroup$
      – Just_to_Answer
      Aug 8 '17 at 21:52


















    • $begingroup$
      This looks mathematically fine, but is there way to intuitively understand this?
      $endgroup$
      – aroma
      Aug 8 '17 at 13:01










    • $begingroup$
      Perhaps it's a little easier to think it the other way math.stackexchange.com/questions/868400/…
      $endgroup$
      – leonbloy
      Aug 8 '17 at 13:06










    • $begingroup$
      One intuitive way to think about the concept above (inverse transform sampling) is this: Visualize the graph of your CDF which has the range $(0,1)$. Imagine picking a point randomly in the range $(0,1)$ on the vertical axis, then going horizontally until you hit the CDF, then dropping to the horizontal axis, and picking where you landed to be your rv's realization. Think about doing this over and over. In the long run, think about the distribution of the realizations you will get.
      $endgroup$
      – Just_to_Answer
      Aug 8 '17 at 21:52
















    $begingroup$
    This looks mathematically fine, but is there way to intuitively understand this?
    $endgroup$
    – aroma
    Aug 8 '17 at 13:01




    $begingroup$
    This looks mathematically fine, but is there way to intuitively understand this?
    $endgroup$
    – aroma
    Aug 8 '17 at 13:01












    $begingroup$
    Perhaps it's a little easier to think it the other way math.stackexchange.com/questions/868400/…
    $endgroup$
    – leonbloy
    Aug 8 '17 at 13:06




    $begingroup$
    Perhaps it's a little easier to think it the other way math.stackexchange.com/questions/868400/…
    $endgroup$
    – leonbloy
    Aug 8 '17 at 13:06












    $begingroup$
    One intuitive way to think about the concept above (inverse transform sampling) is this: Visualize the graph of your CDF which has the range $(0,1)$. Imagine picking a point randomly in the range $(0,1)$ on the vertical axis, then going horizontally until you hit the CDF, then dropping to the horizontal axis, and picking where you landed to be your rv's realization. Think about doing this over and over. In the long run, think about the distribution of the realizations you will get.
    $endgroup$
    – Just_to_Answer
    Aug 8 '17 at 21:52




    $begingroup$
    One intuitive way to think about the concept above (inverse transform sampling) is this: Visualize the graph of your CDF which has the range $(0,1)$. Imagine picking a point randomly in the range $(0,1)$ on the vertical axis, then going horizontally until you hit the CDF, then dropping to the horizontal axis, and picking where you landed to be your rv's realization. Think about doing this over and over. In the long run, think about the distribution of the realizations you will get.
    $endgroup$
    – Just_to_Answer
    Aug 8 '17 at 21:52











    0












    $begingroup$

    Let $U$ be a random variable with uniform distribution on $(0,1)$.



    Prescribe $Phi:(0,1)tomathbb R$ by:$$umapstoinf{xinmathbb Rmid uleq F(x)}$$



    Then it can be shown that:$$Phi(u)leq xiff uleq F(x)$$



    So letting $U$ be a random variable with uniform distribution on $(0,1)$ we have:$$Phi(U)leq xiff Uleq F(x)$$and consequently:$$F_{Phi(U)}(x)=P(Phi(u)leq x)=P(Uleq F(x))=F(x)$$



    This always works.



    In the special case where $F$ has an inverse we find that $Phi$ is actually that inverse.






    share|cite|improve this answer









    $endgroup$


















      0












      $begingroup$

      Let $U$ be a random variable with uniform distribution on $(0,1)$.



      Prescribe $Phi:(0,1)tomathbb R$ by:$$umapstoinf{xinmathbb Rmid uleq F(x)}$$



      Then it can be shown that:$$Phi(u)leq xiff uleq F(x)$$



      So letting $U$ be a random variable with uniform distribution on $(0,1)$ we have:$$Phi(U)leq xiff Uleq F(x)$$and consequently:$$F_{Phi(U)}(x)=P(Phi(u)leq x)=P(Uleq F(x))=F(x)$$



      This always works.



      In the special case where $F$ has an inverse we find that $Phi$ is actually that inverse.






      share|cite|improve this answer









      $endgroup$
















        0












        0








        0





        $begingroup$

        Let $U$ be a random variable with uniform distribution on $(0,1)$.



        Prescribe $Phi:(0,1)tomathbb R$ by:$$umapstoinf{xinmathbb Rmid uleq F(x)}$$



        Then it can be shown that:$$Phi(u)leq xiff uleq F(x)$$



        So letting $U$ be a random variable with uniform distribution on $(0,1)$ we have:$$Phi(U)leq xiff Uleq F(x)$$and consequently:$$F_{Phi(U)}(x)=P(Phi(u)leq x)=P(Uleq F(x))=F(x)$$



        This always works.



        In the special case where $F$ has an inverse we find that $Phi$ is actually that inverse.






        share|cite|improve this answer









        $endgroup$



        Let $U$ be a random variable with uniform distribution on $(0,1)$.



        Prescribe $Phi:(0,1)tomathbb R$ by:$$umapstoinf{xinmathbb Rmid uleq F(x)}$$



        Then it can be shown that:$$Phi(u)leq xiff uleq F(x)$$



        So letting $U$ be a random variable with uniform distribution on $(0,1)$ we have:$$Phi(U)leq xiff Uleq F(x)$$and consequently:$$F_{Phi(U)}(x)=P(Phi(u)leq x)=P(Uleq F(x))=F(x)$$



        This always works.



        In the special case where $F$ has an inverse we find that $Phi$ is actually that inverse.







        share|cite|improve this answer












        share|cite|improve this answer



        share|cite|improve this answer










        answered Aug 8 '17 at 13:05









        drhabdrhab

        103k545136




        103k545136























            0












            $begingroup$

            If you are studying elementary probability theory, allow me to reformulate your question as "how can I represent a random variable $X$ with a given CDF $F_X$ in terms of a uniform random variable $U$ on $(0,1)$?" The answer to that is the quantile function: you define



            $$G_X(p)=inf { x : F_X(x) geq p }$$
            and then define $X$ to be $G_X(U)$.



            Note that if $F_X$ is invertible then $G_X=F_X^{-1}$, otherwise this is "the right generalization". One can see this by looking at the discrete case: if $P(X=x)=p$ then $P(G_X(U)=x)=p$. This is because a jump of height $p$ in $F_X$ corresponds to a flat region of length $p$ in $G_X$, and the uniform distribution on $(0,1)$ assigns each interval a probability equal to its length.



            The natural question is now "what's a uniform random variable on $(0,1)$?" Well, it has $F_U(x)=\begin{cases} 0 & x<0 \\ x & x \in [0,1] \\ 1 & x>1 \end{cases}$. But otherwise such a thing is a black box from the elementary point of view.



            If you are studying measure-theoretic probability theory then the answer is a bit more explicit. A random variable with CDF $F_X$ is given by $G_X : \Omega \to \mathbb{R}$ where $G_X$ is the quantile function as defined before, $\Omega=(0,1)$, $\mathcal{F}$ is the Borel $\sigma$-algebra on $(0,1)$, and $\mathbb{P}$ is the Lebesgue measure. Note that on this space the identity function is a uniform random variable on $(0,1)$, so this is really the same construction as the one described above.



            In any case these constructions can be generalized to finitely many random variables by looking at the uniform distribution on $(0,1)^n$ instead of $(0,1)$.
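            To illustrate the discrete case described above, here is a minimal Python sketch (the particular three-point distribution is an assumed example, not from the answer); a jump of height $p$ in $F_X$ becomes a flat stretch of length $p$ in $G_X$, so $G_X(U)$ hits each value with the right probability:

```python
import random
from bisect import bisect_left

# Assumed target distribution: P(X=0)=0.3, P(X=1)=0.5, P(X=2)=0.2.
values = [0, 1, 2]
cum = [0.3, 0.8, 1.0]  # cumulative probabilities F_X(0), F_X(1), F_X(2)

def G(p):
    """Quantile function G_X(p) = inf{x : F_X(x) >= p}, via binary search."""
    return values[bisect_left(cum, p)]

# Sample X = G(U) and check each value's empirical frequency.
rng = random.Random(42)
n = 100_000
counts = {v: 0 for v in values}
for _ in range(n):
    counts[G(rng.random())] += 1

assert abs(counts[0] / n - 0.3) < 0.01
assert abs(counts[1] / n - 0.5) < 0.01
assert abs(counts[2] / n - 0.2) < 0.01
```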















            Would the downvoter care to comment? My guess would be that you are either criticizing the first paragraph or the longwindedness of the rest of it, but I am curious to hear your particular criticism, especially as it apparently does not pertain to any of the other answers.
            – Ian, Aug 8 '17 at 13:57


















            edited Sep 18 '17 at 15:55

























            answered Aug 8 '17 at 12:43 by Ian









            The $\text{cdf}$ of a random variable gives the probability that its value falls below a given bound,

            $$\text{cdf}_X(x)=\mathbb P(X<x).$$

            For a uniform variable $U$, we have

            $$\text{cdf}_U(x)=\mathbb P(U<x)=x$$ for $x\in[0,1]$.



            Now if you want to create a random variable with an imposed $\text{cdf}$, say $h(x)$, you want to achieve

            $$\mathbb P(X<x)=h(x).$$

            If $X$ derives from a uniform variable via some transformation, say $X=g(U)$, you have

            $$\mathbb P(X<x)=\mathbb P(g(U)<x)=h(x).$$

            But also, assuming $g$ is invertible and using the fact that $U$ is uniform,

            $$\mathbb P(g(U)<x)=\mathbb P(U<g^{-1}(x))=g^{-1}(x).$$

            Putting these facts together,

            $$g^{-1}(x)=h(x)$$ or $$g(x)=h^{-1}(x).$$





            For a simple example, take a negative exponential distribution,

            $$\text{cdf}_X=h(x)=1-e^{-x}.$$

            Then by inversion,

            $$g(x)=-\ln(1-x),$$ which maps $0$ to $0$ and $1$ to $\infty$. By looking at the plot of $g$, you can see that the density of the transformed points decreases as the slope increases, which biases the samples toward small values.



            [Plot of $g(x)=-\ln(1-x)$ mapping uniform samples on $(0,1)$ to exponential samples]
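            A quick numerical check of this exponential example can be sketched in Python (the sample size and tolerances are arbitrary choices, not part of the answer):

```python
import math
import random

def g(u):
    """Inverse of h(x) = 1 - exp(-x): maps uniform samples to Exp(1)."""
    return -math.log(1.0 - u)

rng = random.Random(1)
samples = [g(rng.random()) for _ in range(100_000)]

# The mean of Exp(1) should be near 1, and the empirical CDF near h.
mean = sum(samples) / len(samples)
assert abs(mean - 1.0) < 0.02
for x in [0.5, 1.0, 2.0]:
    empirical = sum(s <= x for s in samples) / len(samples)
    assert abs(empirical - (1.0 - math.exp(-x))) < 0.01
```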





            We can even play the same game when the source variable $U$ is not uniform but has a known $\text{cdf}$, say $f(x)$. Then it suffices to adapt the equations:

            $$\mathbb P(X<x)=\mathbb P(g(U)<x)=\mathbb P(U<g^{-1}(x))=f(g^{-1}(x))=h(x),$$ and we need

            $$g(x)=h^{-1}(f(x)).$$
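            As a sketch of this generalization (the specific distributions are assumed for illustration), take a hypothetical $\mathrm{Exp}(1)$ source with $f(x)=1-e^{-x}$ and a target CDF $h(x)=x^2$ on $[0,1]$; then $g(x)=h^{-1}(f(x))=\sqrt{1-e^{-x}}$:

```python
import math
import random

def f(x):
    """CDF of the assumed source variable: Exp(1)."""
    return 1.0 - math.exp(-x)

def h_inv(p):
    """Inverse of the assumed target CDF h(x) = x^2 on [0,1]."""
    return math.sqrt(p)

def g(x):
    """g = h^{-1} o f turns an Exp(1) sample into one with CDF h."""
    return h_inv(f(x))

rng = random.Random(7)
# Draw Exp(1) via inverse transform, then push each draw through g.
samples = [g(-math.log(1.0 - rng.random())) for _ in range(100_000)]

# The empirical CDF of the transformed samples should approach h(x) = x^2.
for x in [0.25, 0.5, 0.9]:
    empirical = sum(s <= x for s in samples) / len(samples)
    assert abs(empirical - x * x) < 0.01
```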













            The right-continuous convention for CDFs is much more common these days than the left-continuous one.
            – Ian, Aug 8 '17 at 13:19

            @Ian: yep, but this is irrelevant to the present discussion.
            – Yves Daoust, Aug 8 '17 at 13:30

            Well... sort of. Of course everything turns out the same in either convention, but some definitions are altered in ways that might not be obvious. For example the quantile function is defined through a sup rather than an inf.
            – Ian, Aug 8 '17 at 13:32


















            edited Aug 8 '17 at 13:39

























            answered Aug 8 '17 at 13:17 by Yves Daoust







