Alternative approaches to obtain the expected value of the geometric distribution














Given that $X$ has a geometric distribution with $p_{X}(x) = p(1-p)^{x-1}$, determine $\textbf{E}(X)$.



MY ATTEMPT



\begin{align*}
\textbf{E}(X) = \sum_{x=1}^{\infty}xp(1-p)^{x-1} = p\sum_{x=1}^{\infty}x(1-p)^{x-1}
\end{align*}



If we define
\begin{align*}
F(w) = \sum_{k=1}^{\infty} w^{k} = \frac{w}{1 - w}\quad\text{for}\quad |w| < 1,
\end{align*}
then, differentiating the series term by term, we conclude that
\begin{align*}
\textbf{E}(X) = p\sum_{x=1}^{\infty}x(1-p)^{x-1} = pF^{\prime}(1-p).
\end{align*}



Since $\displaystyle F^{\prime}(w) = \frac{1}{(1-w)^{2}}$, it is now possible to obtain the desired result:
\begin{align*}
\textbf{E}(X) = \frac{p}{(1-(1-p))^{2}} = \frac{1}{p}
\end{align*}
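As a quick numerical sanity check of the derivation (a Python sketch added here, not part of the original post), the partial sums of the series $\sum_{x\ge 1} x\,p(1-p)^{x-1}$ should approach $1/p$:

```python
def expected_value_series(p, terms=10_000):
    """Partial sum of E(X) = sum over x >= 1 of x * p * (1 - p)**(x - 1)."""
    return sum(x * p * (1 - p) ** (x - 1) for x in range(1, terms + 1))

# For any p in (0, 1], the truncated series should be very close to 1/p;
# the tail decays geometrically, so 10,000 terms is far more than enough.
approx = expected_value_series(0.25)  # expected to be very close to 1/p = 4
```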



If my answer is correct, could someone suggest another approach to this problem? I would prefer solutions that do not involve sophisticated methods. Thanks in advance.



























      probability probability-theory proof-verification probability-distributions







      edited Jan 22 at 22:47







      user1337

















      asked Jan 22 at 21:10









user1337























          3 Answers



















          The geometric distribution gives the number of trials until the first success (including the successful one, in your mass function above) in a sequence of trials with probability of success $p$.



The first trial is either a success (probability $p$) or a failure (probability $1-p$); if it is a success, you are done and $X=1$. If it is a failure, you are left with a fresh copy of the same geometric process, plus the one failed trial you have already spent.



In other words,
$$
\mathbb{E}[X]=p\cdot1+(1-p)\cdot(1+\mathbb{E}[X])=1+(1-p)\mathbb{E}[X].
$$

Subtracting $(1-p)\mathbb{E}[X]$ from both sides yields
$$
\mathbb{E}[X]-(1-p)\mathbb{E}[X]=1,
$$

which simplifies to
$$
p\mathbb{E}[X]=1\qquad\Rightarrow\qquad\mathbb{E}[X]=\frac{1}{p}.
$$
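The recursive description above can also be checked by simulation (a Python sketch added here, not part of the original answer): build the geometric variable exactly as described, one trial at a time, and compare the sample mean to $1/p$.

```python
import random

def sample_geometric(p, rng):
    """Trials up to and including the first success, built exactly like the
    recursive argument: each failure costs one trial and restarts the process."""
    trials = 1
    while rng.random() >= p:  # failure, with probability 1 - p
        trials += 1
    return trials

rng = random.Random(0)  # fixed seed for reproducibility
p, n = 0.25, 200_000
sample_mean = sum(sample_geometric(p, rng) for _ in range(n)) / n
# By the law of large numbers, sample_mean should be close to 1/p = 4.
```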






answered Jan 22 at 22:56 by Nick Peterson













• Nice argument! Thanks for the contribution. – user1337, Jan 22 at 23:04




















Your result is correct. You can also use a version of the Fubini–Tonelli theorem (i.e. changing the order of summation):
$$\begin{eqnarray}
\sum_{j=0}^\infty j(1-p)^{j-1}&=&\sum_{j=0}^\infty \left(\sum_{k=0}^{j-1} 1\right)(1-p)^{j-1}\\&=&\sum_{k=0}^\infty \left(\sum_{j=k+1}^\infty(1-p)^{j-1} \right)\\
&=&\sum_{k=0}^\infty \frac{(1-p)^k}{p}\\
&=&\frac{1}{p^2}.
\end{eqnarray}$$
This gives
$$
\Bbb E[X]=p\sum_{j=0}^\infty j(1-p)^{j-1}=\frac{1}{p}.
$$


Note that $$\binom{-2}{j}=\frac{(-2)(-3)\cdots(-1-j)}{j!}=(-1)^j(j+1).$$ The generalized binomial theorem also gives
$$
\sum_{j=0}^\infty (j+1)x^j=\sum_{j=0}^\infty \binom{-2}{j}(-x)^j=(1-x)^{-2}.
$$
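The exchange of summation order can be verified numerically (a Python sketch added here, not part of the original answer): summing in either order should agree with the closed form $1/p^2$.

```python
def direct_sum(p, N=2000):
    """Truncation of sum over j >= 1 of j * (1-p)**(j-1)."""
    return sum(j * (1 - p) ** (j - 1) for j in range(1, N + 1))

def swapped_sum(p, N=2000):
    """After exchanging the order of summation, the inner geometric tail
    sums exactly to (1-p)**k / p, leaving a single sum over k."""
    return sum((1 - p) ** k / p for k in range(N))

# Both orders of summation should agree with each other and with 1/p**2.
```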






edited Jan 22 at 21:31; answered Jan 22 at 21:24 by Song






















I can offer a slightly different way of doing this:
$$ E(X) = p \sum_{x=1}^{\infty} x(1-p)^{x-1} $$
$$ (1-p)E(X) = p\sum_{x=1}^{\infty} x(1-p)^{x} $$
$$ E(X) - (1-p)E(X) = p\sum_{x=0}^{\infty} (1-p)^{x} $$
You can see this by writing down the first few terms in each sum:
$$ E(X) = p( 1 + 2(1-p) + 3(1-p)^2 + \cdots)$$
$$ (1-p)E(X) = p( (1-p) + 2(1-p)^2 + 3(1-p)^3 + \cdots) $$
$$ E(X) - (1-p)E(X) = p( 1 + (1-p) + (1-p)^2 + (1-p)^3 + \cdots) $$
$$ E(X) - (1-p)E(X) = p\,\frac{1}{1-(1-p)} $$
$$ E(X) = \frac{1}{p}$$
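The term-by-term cancellation in this subtraction can be made concrete (a Python sketch added here, not part of the original answer): grouping both series by powers of $(1-p)$, each power $x$ contributes $(x+1)(1-p)^x - x(1-p)^x = (1-p)^x$, i.e. exactly one geometric term.

```python
p, N = 0.3, 50

# Coefficients of (1-p)**x in E(X)/p and in (1-p)E(X)/p, aligned by power.
ex_terms  = [(x + 1) * (1 - p) ** x for x in range(N)]  # terms of E(X)/p
shifted   = [x * (1 - p) ** x for x in range(N)]        # terms of (1-p)E(X)/p
geometric = [(1 - p) ** x for x in range(N)]            # plain geometric terms

# Term by term, (x+1)(1-p)**x - x*(1-p)**x should equal (1-p)**x.
diffs = [a - b for a, b in zip(ex_terms, shifted)]
```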






answered Jan 22 at 21:52 by Rohan Nuckchady












