Why does using the identity $e^x=1/e^{-x}$ work better in evaluating negative numbers in the finite Taylor...












3














I was in a class where we looked at both methods and noticed that there is a difference in the error, but we didn't go into why. The other method used the Taylor expansion
$$e^x=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots$$



Why does using the identity $$e^x=\frac{1}{e^{-x}}$$ work better for negative numbers?



To try and clear up what I'm asking: we coded a program to graph the error of the Taylor series expansion of $e^x$ to $n$ terms. We then coded another one that uses the identity mentioned above in the expansion, and noticed that it worked better for negative numbers. Why is that the case?



For comparison, we computed the absolute fractional error of the sums $$\left|\frac{T(x,N)-e^x}{e^x}\right|$$
for each method (with the identity and without), where $T(x,N)$ is the $N$-th order Taylor series expansion of $e^x$. We plotted the error against the order of expansion (the number of terms in the sum). We evaluated various numbers and saw that, without the identity, the error was higher for negative numbers.
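A minimal version of the experiment described above can be sketched as follows (double-precision Python rather than the class's MATLAB; `taylor_exp` and `rel_error` are my own helper names, not from the class code):

```python
import math

def taylor_exp(x, n):
    """Partial Taylor sum 1 + x + x^2/2! + ... + x^n/n!,
    building each term from the previous one."""
    term, total = 1.0, 1.0
    for k in range(1, n + 1):
        term *= x / k
        total += term
    return total

def rel_error(approx, exact):
    """Absolute fractional error |T(x, N) - e^x| / e^x."""
    return abs(approx - exact) / exact

x, n = -10.0, 60  # n well past the point where new terms are negligible
exact = math.exp(x)

direct = taylor_exp(x, n)               # sum the series at x < 0 directly
via_identity = 1.0 / taylor_exp(-x, n)  # use e^x = 1/e^{-x}

print(rel_error(direct, exact))         # noticeably worse...
print(rel_error(via_identity, exact))   # ...than the identity version
```

Running this for a few negative values of $x$ reproduces the effect: the direct sum's relative error stalls several orders of magnitude above that of the reciprocal method.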





























  • Why was this downvoted? – parsiad, Jan 29 '17 at 21:10

  • @parsiad based on the close vote, it seems that someone found it unclear what exactly is being asked. I would agree that it's not clear what the asker means. – Omnomnomnom, Jan 29 '17 at 21:21

  • @Citut when you say $1/e^{-x}$ works better "for negative numbers", do you mean $1/e^{-x}$ works better "for negative values of $x$"? – Omnomnomnom, Jan 29 '17 at 21:22

  • @Citut also, could you clarify what "works better" is supposed to mean (or what you think it should mean)? Do you mean that the series for $1/e^{-x}$ is supposed to converge more quickly? – Omnomnomnom, Jan 29 '17 at 21:23

  • @Omnomnomnom it is pretty obvious that "converge more quickly" is exactly what the OP meant. – Wolfram, Jan 29 '17 at 21:35
















taylor-expansion matlab computational-mathematics






asked Jan 29 '17 at 21:03
edited Jan 29 '17 at 21:52
Citut



2 Answers


















2













Intuitive answer: when we sum the series up to the term $x^n/n!$, the absolute error is roughly the absolute value of the next term, $|x^{n+1}/(n+1)!|$, because for large enough $n$ each term is much smaller than the previous one. So if we add up to the $n$th term, the absolute error is roughly the same for $e^x$ and $e^{-x}$, because these absolute values coincide for opposite arguments. However, for $x<0$ we have $e^x<e^{-x}$, so the relative error is much higher for $e^x$ than for $e^{-x}$. If we instead calculate $e^x$ as $1/e^{-x}$, the relative error is the same as that of $e^{-x}$, and thus smaller.
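This pair of claims — comparable absolute errors for $\pm x$, very different relative errors — can be checked numerically. A small sketch, assuming double precision (`taylor_exp` is my own helper name):

```python
import math

def taylor_exp(x, n):
    """Partial Taylor sum of e^x up to the term x^n/n!."""
    term, total = 1.0, 1.0
    for k in range(1, n + 1):
        term *= x / k
        total += term
    return total

x, n = 4.0, 10

# absolute truncation errors at +x and -x: both are on the order of the
# next term |x|^(n+1)/(n+1)!, so they are within a small factor of each other
abs_err_pos = abs(taylor_exp(x, n) - math.exp(x))
abs_err_neg = abs(taylor_exp(-x, n) - math.exp(-x))

# relative errors: dividing by e^(-x), which is far smaller than e^(x),
# blows up the relative error on the negative side
rel_err_pos = abs_err_pos / math.exp(x)
rel_err_neg = abs_err_neg / math.exp(-x)
```

With these values the absolute errors differ by a small factor, while the relative error at $-x$ is larger by several orders of magnitude.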

















answered Jan 29 '17 at 21:44, edited Jan 29 '17 at 21:50
Wolfram





















    0













    I know it's really late, but since there is no accepted answer I'll answer it myself.

    The first thing you need to know about is the "machine epsilon". When a computer represents a number in floating point (32 or 64 bits), say the number 1, there is a small relative margin of error ($2^{-23}$ for 32-bit and $2^{-52}$ for 64-bit floats). That is what we call $\epsilon$.

    If we add or subtract a number $x$ with $|x| < \epsilon\,|y|$ to another number $y$, our machine computes the result as just $y$: the contribution of $x$ is rounded away.

    For $x < 0$, the terms of the Taylor polynomial of $e^x$ alternate in sign:
    $$e^x = f_0 - f_1 + f_2 - f_3 + \cdots \pm f_n, \qquad f_k = \frac{|x|^k}{k!}.$$
    As the order grows, the new terms get smaller while the running sums stay comparatively large, so contributions near $\epsilon$ times the running sum are rounded away, creating unwanted error (catastrophic cancellation).

    If instead we compute $e^x$ for $x < 0$ as $\frac{1}{e^{|x|}}$, all the terms of the Taylor series are positive, we avoid these cancellation errors, and the method converges in fewer iterations.

    PS: I just created an account to answer this; I hope it's understandable and answers your question :)
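    A short experiment illustrating the cancellation described above (assuming 64-bit floats; `taylor_exp` is my own helper name). At $x=-20$ the intermediate terms reach about $4.3\times 10^{7}$ while the true result is about $2\times 10^{-9}$, so rounding at relative size $2^{-52}$ destroys most of the answer:

```python
import math

def taylor_exp(x, n):
    """Partial Taylor sum of e^x, accumulated in double precision."""
    term, total = 1.0, 1.0
    for k in range(1, n + 1):
        term *= x / k
        total += term
    return total

x, n = -20.0, 120            # enough terms that truncation error is negligible
exact = math.exp(x)          # about 2.06e-9

# alternating terms peak near 20^20/20! ~ 4.3e7, so double-precision rounding
# leaves absolute errors around 4.3e7 * 2^-52 ~ 1e-8: larger than the answer
direct = taylor_exp(x, n)

# all-positive terms: no cancellation, relative error stays near epsilon
via_recip = 1.0 / taylor_exp(-x, n)
```

    The reciprocal version stays accurate to near machine precision, while the direct sum's relative error is of order one.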















answered Jan 15 at 12:47
Ferran Capallera Guirado












