Avoiding numerical cancellation for $\sin x - \sin y$ when $x \approx y$












When trying to avoid cancellation, one tries to reformulate the expression so as to avoid subtracting almost equal terms.

For $\sin(x) - \sin(y)$ with $x \approx y$, the suggested solution is to reformulate it as $$2\cos\left(\frac{x+y}{2}\right)\sin\left(\frac{x-y}{2}\right).$$

But I don't understand how this is any better: the subtraction between $x$ and $y$ remains. Is it because $\lvert\,\sin(x)-\sin(y)\,\rvert \leq \lvert\,x-y\,\rvert$, so cancellation is less likely to happen between $x$ and $y$ than between the sines?
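To make the cancellation concrete, consider a small worked example in four-digit decimal arithmetic (an illustrative format, not the actual machine one): for $x = 1.000$ and $y = 1.001$ the rounded sines are $\sin x \approx 0.8415$ and $\sin y \approx 0.8420$, so the computed difference is $-0.0005$, while the true value is about $-0.000540$; of the four digits carried by each sine, only about one survives the subtraction.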










Tags: trigonometry, numerical-methods, catastrophic-cancellation






asked Oct 24 '17 at 12:25 by B.Swan; edited Jan 22 at 6:32 by Martin Sleziak






















3 Answers

























          It really depends on how exactly $x$ and $y$ are given. Frequently what we really have is not $x$ and $y$ but $x$ and $y-x$. We can then gain precision if we can analytically write the subtraction of nearly equal numbers exclusively in terms of $y-x$, like you did here, because we are already given $y-x$ accurately (more accurately than we would get if we computed it directly).



There are actually some standard library functions out there specialized to this exact purpose, for example "log1p", which is used to compute $\log(1+x)$ for small $x$.






answered Oct 24 '17 at 12:31 by Ian













• So assuming $x=y+h$ it would become $2\cos(x+\frac{h}{2})\sin(\frac{h}{2})$. Can it be assumed that $y-x$ is given though? As a beginner in numerics I am never sure when the exact values are available. – B.Swan, Oct 24 '17 at 12:46












• @B.Swan Right. Then at least the $\sin(h/2)$ can be accurately computed if $h$ is already given accurately. – Ian, Oct 24 '17 at 12:47










• @B.Swan It really depends on how the problem is given, as I said. But if $y-x$ is not already given accurately, then this trick gives a modest benefit at best (because you will already commit severe error inside the sine). Let me put it another way: this approach does not really help you define a function delta_sin(x,y). It helps you define a function delta_sin(x,h). If your caller needs delta_sin(x,y), then you will need to do something else to help them. But if they need delta_sin(x,h), then you can help them. Does that make sense? – Ian, Oct 24 '17 at 12:48
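As the comments suggest, the reformulation is most useful when the caller can supply $x$ and $h = y - x$ directly. A minimal sketch of such a helper in Python (the name sin_diff and its interface are made up for illustration; it plays the same role for this expression that the standard math.log1p and math.expm1 play for theirs):

    import math

    def sin_diff(x, h):
        """Approximate sin(x + h) - sin(x) without subtracting two nearby sines."""
        return 2.0 * math.cos(x + h / 2.0) * math.sin(h / 2.0)

    x, h = 1.0, 1e-9
    print(sin_diff(x, h))                   # product form keeps full precision in h
    print(math.sin(x + h) - math.sin(x))    # naive form loses digits to cancellation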






















One possibility: to determine the sine you use the Maclaurin series, and the faster this converges the fewer ill-conditioned operations you need to perform for the factor $\sin\left(\frac{x-y}{2}\right)$. The small argument in that factor gets you there.
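To illustrate the convergence claim, here is a small sketch that sums the Maclaurin series directly and counts the terms needed (real libraries use more refined polynomial approximations; this is only meant to show how the term count shrinks with the size of the argument):

    import math

    def sin_maclaurin(t, tol=1e-17):
        # Sum t - t^3/3! + t^5/5! - ... until the next term is negligible.
        term, total, n = t, t, 1
        while abs(term) > tol * abs(total):
            term *= -t * t / ((2 * n) * (2 * n + 1))
            total += term
            n += 1
        return total, n

    for t in (1.0, 0.1, 1e-6):
        value, terms = sin_maclaurin(t)
        print(f"t={t:g}: {terms} terms, value={value!r}, math.sin={math.sin(t)!r}")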






answered Oct 24 '17 at 12:31 by Oscar Lanzi













• Unless $x,y$ are large and their sines are nearly equal, I don't think this has anything to do with it. – Ian, Oct 24 '17 at 12:33












• If $x$ and $y$ are large the Maclaurin series themselves can become ill-conditioned. You can use periodicity and symmetry to reduce them if you have a sufficiently accurate rendering of $\pi$. – Oscar Lanzi, Oct 24 '17 at 12:37










• That periodicity trick is actually very badly conditioned, cf. math.stackexchange.com/questions/1561713/… What I'm really saying here is that if $x,y$ are, say, confined to $[-3\pi/4,3\pi/4]$, the series themselves are not badly conditioned but the difference is badly conditioned. In these cases accelerating the convergence of the series does you rather little good. (Indeed, you actually pass the problem off to the cosine function without making it any better.) – Ian, Oct 24 '17 at 12:40












• Periodicity and symmetry. You can reduce arguments to an absolute value less than or equal to $\pi/4$, actually. – Oscar Lanzi, Oct 24 '17 at 12:42






• Confining to $[-\pi/4,\pi/4]$ doesn't improve matters much more than confining to $[-3\pi/4,3\pi/4]$. Indeed the problem with keeping the series well-conditioned is to avoid being close to a zero of sine other than $x=0$. But again, provided you stick to a range where the series themselves are well-conditioned (or you use another method entirely to compute the trig functions themselves), the problem of computing the difference persists. – Ian, Oct 24 '17 at 12:43






















While a function $f : A \to B$ is a triple, consisting of a domain $A$, a codomain $B$ and a rule which assigns to each element $x \in A$ exactly one element $f(x) \in B$, too many focus exclusively on the rule and forget to carefully specify the domain and the codomain.



In this case the function in question is $f : \mathcal F \times \mathcal F \rightarrow \mathbb R$, where $$f(x,y) = \sin(x) - \sin(y),$$
and $\mathcal F$ is the set of machine numbers, say, double precision floating point numbers. I will explain below why I know that this is the right domain.



The problem of computing a difference $d = a - b$ between two real numbers $a$ and $b$ is ill conditioned when $a \approx b$. Indeed, if $\hat{a} = a(1+\delta_a)$ and $\hat{b} = b(1 + \delta_b)$ are the best available approximations of $a$ and $b$, then we cannot hope to compute a better approximation of $d$ than $\hat{d} = \hat{a} - \hat{b}$. The relative error $$r = \frac{d - \hat{d}}{d}$$
satisfies the bound
$$ |r| \leq \frac{|a| + |b|}{|a-b|} \max\{|\delta_a|,|\delta_b|\}. $$
When $a \approx b$, we cannot guarantee that the difference $d$ is computed with a small relative error. In practice, the relative error is large. We say that the subtraction magnifies the error committed when replacing $a$ with $\hat{a}$ and $b$ with $\hat{b}$.
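For completeness, the bound follows in one line from the stated model for $\hat{a}$ and $\hat{b}$:
$$ d - \hat{d} = -(a\delta_a - b\delta_b), \qquad \text{so} \qquad |r| = \frac{|a\delta_a - b\delta_b|}{|a-b|} \leq \frac{|a| + |b|}{|a-b|}\,\max\{|\delta_a|,|\delta_b|\}. $$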



In your situation $a = \sin(x)$ and $b = \sin(y)$. Errors are committed when computing the sine function. No matter how skilled we are, the best we can hope for is to obtain the floating point representation of $a$, i.e. $\text{fl}(a) = \sin(x)(1 + \delta)$, where $|\delta| \leq u$ and $u$ is the unit roundoff. Why? The computer may well have extra wide registers for internal use, but eventually the result has to be rounded to, say, double precision, so that it can be stored in memory. It follows that if we compute $f$ using the definition and $x \approx y$, then the computed result will have a relative error which is many times the unit roundoff.



In order to avoid the offending subtraction, we turn to the function $g : \mathcal F \times \mathcal F \to \mathbb R$ given by
$$ g(x,y) = 2 \cos\left( \frac{x+y}{2} \right) \sin\left(\frac{x-y}{2} \right). $$
In the absence of rounding errors $f(x,y) = g(x,y)$, but in floating point arithmetic they behave quite differently. The subtraction of two nearby floating point numbers $x$ and $y$ is perfectly safe. In fact, if $y/2 \leq x \leq 2y$ and the subtraction is carried out with a guard digit, then $x-y$ is computed exactly.
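That exactness claim is easy to check empirically (a small Python sketch; float here is IEEE double precision, and the bounds $y/2$ and $2y$ are enforced explicitly before subtracting):

    from fractions import Fraction
    import random

    # If y/2 <= x <= 2y, the floating point result of x - y should be exact.
    # Fraction converts floats exactly, so it serves as an exact reference.
    for _ in range(100_000):
        y = random.uniform(0.5, 2.0)
        x = min(max(random.uniform(y / 2, 2 * y), y / 2), 2 * y)
        assert Fraction(x) - Fraction(y) == Fraction(x - y)
    print("x - y was exact in every trial")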



We are not entirely in the clear, as $x + y$ need not be a floating point number, but it is computed with a relative error bounded by the unit roundoff. In the unfortunate event that $(x+y)/2 \approx (\frac{1}{2} + k)\pi$ for some $k \in \mathbb Z$, the calculation of $g$ suffers from the fact that cosine is ill conditioned near a root.



          Using a conditional to pick the correct expressions allows us to cover a larger subset of the domain.
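A small experiment showing how differently $f$ and $g$ behave in floating point arithmetic (a sketch assuming NumPy is available; single precision is used only to make the effect easy to see, the same thing happens in double precision on a smaller scale):

    import numpy as np

    x = np.float32(1.2345678)
    y = np.float32(1.2345701)          # x and y agree to about six digits

    f = np.sin(x) - np.sin(y)          # naive form, evaluated in single precision
    g = np.float32(2) * np.cos((x + y) / np.float32(2)) * np.sin((x - y) / np.float32(2))

    ref = np.sin(np.float64(x)) - np.sin(np.float64(y))   # double precision reference

    print(f, g, ref)   # g agrees with ref to single precision; f has visibly lost digits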






In general, why $\mathcal F$ rather than $\mathbb R$? Consider the simpler problem of computing $f : \mathbb R \rightarrow \mathbb R$. In general, you do not know the exact value of $x$, and the best you can hope for is $\hat{x}$, the floating point representation of $x$. The impact of this error is controlled by the condition number of $f$. There is nothing you can do about large condition numbers, except switch to better hardware or simulate a smaller unit roundoff $u'$. This leaves you with the task of computing $f(\hat{x})$, where $\hat{x} \in \mathcal F$ is a machine number. That is why $\mathcal F$ is the natural domain during the second stage of designing an algorithm for computing approximations of $f : \mathbb R \to \mathbb R$.




answered Oct 24 '17 at 20:17 by Carl Christian; edited Oct 25 '17 at 14:32













• The book I am using ("Algorithmic mathematics" by Vygen/Hougardy, I think it's only available in German) actually does define a computation problem as a triple with two sets $A, B$ and a relation $R$, which is a subset of $A \times B$, and in numerical computational problems it actually sets $A,B$ as subsets of $\mathbb{R}$, not necessarily as the machine numbers. It does that in the definition of the condition, too. I will take my time to think about your answer though and discuss it with my professor, thank you. – B.Swan, Oct 24 '17 at 20:38












• @B.Swan Conditioning should certainly be considered for subsets of $\mathbb R$. Once this first stage is complete, one must consider how the target function is evaluated for floating point numbers. This is frequently difficult as seen above, and a rewrite or an approximation must be sought. Then comes the third and final stage, which consists of evaluating the chosen approximation. We want $T(x)$, but can only get $\hat{A}(\hat{x})$. The error can be expressed as $T(x) - \hat{A}(\hat{x}) = T(x) - T(\hat{x}) + T(\hat{x}) - A(\hat{x}) + A(\hat{x}) - \hat{A}(\hat{x})$ – Carl Christian, Oct 24 '17 at 21:55













          Your Answer





          StackExchange.ifUsing("editor", function () {
          return StackExchange.using("mathjaxEditing", function () {
          StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
          StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
          });
          });
          }, "mathjax-editing");

          StackExchange.ready(function() {
          var channelOptions = {
          tags: "".split(" "),
          id: "69"
          };
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function() {
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled) {
          StackExchange.using("snippets", function() {
          createEditor();
          });
          }
          else {
          createEditor();
          }
          });

          function createEditor() {
          StackExchange.prepareEditor({
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: true,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: 10,
          bindNavPrevention: true,
          postfix: "",
          imageUploader: {
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          },
          noCode: true, onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          });


          }
          });














          draft saved

          draft discarded


















          StackExchange.ready(
          function () {
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f2487470%2favoiding-numerical-cancellation-question-for-sin-x-sin-y-for-x-approx-y%23new-answer', 'question_page');
          }
          );

          Post as a guest















          Required, but never shown

























          3 Answers
          3






          active

          oldest

          votes








          3 Answers
          3






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          3












          $begingroup$

          It really depends on how exactly $x$ and $y$ are given. Frequently what we really have is not $x$ and $y$ but $x$ and $y-x$. We can then gain precision if we can analytically write the subtraction of nearly equal numbers exclusively in terms of $y-x$, like you did here, because we are already given $y-x$ accurately (more accurately than we would get if we computed it directly).



          There are actually some standard library functions out there specialized to this exact purpose, for example "log1p", which is used to compute $log(1+x)$ for small $x$.






          share|cite|improve this answer









          $endgroup$













          • $begingroup$
            So assuming $x=y+h$ it would become $2cos (x+frac{h}{2})sin(frac{h}{2})$. Can it be assumed that $y-x$ is given though? As a beginner in numerics I am never sure when the exact values are available.
            $endgroup$
            – B.Swan
            Oct 24 '17 at 12:46












          • $begingroup$
            @B.Swan Right. Then at least the $sin(h/2)$ can be accurately computed if $h$ is already given accurately.
            $endgroup$
            – Ian
            Oct 24 '17 at 12:47










          • $begingroup$
            @B.Swan It really depends on how the problem is given, as I said. But if $y-x$ is not already given accurately, then this trick gives a modest benefit at best (because you will already commit severe error inside the sine). Let me put it another way: this approach does not really help you define a function delta_sin(x,y). It helps you define a function delta_sin(x,h). If your caller needs delta_sin(x,y), then you will need to do something else to help them. But if they need delta_sin(x,h), then you can help them. Does that make sense?
            $endgroup$
            – Ian
            Oct 24 '17 at 12:48


















          3












          $begingroup$

          It really depends on how exactly $x$ and $y$ are given. Frequently what we really have is not $x$ and $y$ but $x$ and $y-x$. We can then gain precision if we can analytically write the subtraction of nearly equal numbers exclusively in terms of $y-x$, like you did here, because we are already given $y-x$ accurately (more accurately than we would get if we computed it directly).



          There are actually some standard library functions out there specialized to this exact purpose, for example "log1p", which is used to compute $log(1+x)$ for small $x$.






          share|cite|improve this answer









          $endgroup$













          • $begingroup$
            So assuming $x=y+h$ it would become $2cos (x+frac{h}{2})sin(frac{h}{2})$. Can it be assumed that $y-x$ is given though? As a beginner in numerics I am never sure when the exact values are available.
            $endgroup$
            – B.Swan
            Oct 24 '17 at 12:46












          • $begingroup$
            @B.Swan Right. Then at least the $sin(h/2)$ can be accurately computed if $h$ is already given accurately.
            $endgroup$
            – Ian
            Oct 24 '17 at 12:47










          • $begingroup$
            @B.Swan It really depends on how the problem is given, as I said. But if $y-x$ is not already given accurately, then this trick gives a modest benefit at best (because you will already commit severe error inside the sine). Let me put it another way: this approach does not really help you define a function delta_sin(x,y). It helps you define a function delta_sin(x,h). If your caller needs delta_sin(x,y), then you will need to do something else to help them. But if they need delta_sin(x,h), then you can help them. Does that make sense?
            $endgroup$
            – Ian
            Oct 24 '17 at 12:48
















          3












          3








          3





          $begingroup$

          It really depends on how exactly $x$ and $y$ are given. Frequently what we really have is not $x$ and $y$ but $x$ and $y-x$. We can then gain precision if we can analytically write the subtraction of nearly equal numbers exclusively in terms of $y-x$, like you did here, because we are already given $y-x$ accurately (more accurately than we would get if we computed it directly).



          There are actually some standard library functions out there specialized to this exact purpose, for example "log1p", which is used to compute $log(1+x)$ for small $x$.






          share|cite|improve this answer









          $endgroup$



          It really depends on how exactly $x$ and $y$ are given. Frequently what we really have is not $x$ and $y$ but $x$ and $y-x$. We can then gain precision if we can analytically write the subtraction of nearly equal numbers exclusively in terms of $y-x$, like you did here, because we are already given $y-x$ accurately (more accurately than we would get if we computed it directly).



          There are actually some standard library functions out there specialized to this exact purpose, for example "log1p", which is used to compute $log(1+x)$ for small $x$.







          share|cite|improve this answer












          share|cite|improve this answer



          share|cite|improve this answer










          answered Oct 24 '17 at 12:31









          IanIan

          68.4k25388




          68.4k25388












          • $begingroup$
            So assuming $x=y+h$ it would become $2cos (x+frac{h}{2})sin(frac{h}{2})$. Can it be assumed that $y-x$ is given though? As a beginner in numerics I am never sure when the exact values are available.
            $endgroup$
            – B.Swan
            Oct 24 '17 at 12:46












          • $begingroup$
            @B.Swan Right. Then at least the $sin(h/2)$ can be accurately computed if $h$ is already given accurately.
            $endgroup$
            – Ian
            Oct 24 '17 at 12:47










          • $begingroup$
            @B.Swan It really depends on how the problem is given, as I said. But if $y-x$ is not already given accurately, then this trick gives a modest benefit at best (because you will already commit severe error inside the sine). Let me put it another way: this approach does not really help you define a function delta_sin(x,y). It helps you define a function delta_sin(x,h). If your caller needs delta_sin(x,y), then you will need to do something else to help them. But if they need delta_sin(x,h), then you can help them. Does that make sense?
            $endgroup$
            – Ian
            Oct 24 '17 at 12:48




















          • $begingroup$
            So assuming $x=y+h$ it would become $2cos (x+frac{h}{2})sin(frac{h}{2})$. Can it be assumed that $y-x$ is given though? As a beginner in numerics I am never sure when the exact values are available.
            $endgroup$
            – B.Swan
            Oct 24 '17 at 12:46












          • $begingroup$
            @B.Swan Right. Then at least the $sin(h/2)$ can be accurately computed if $h$ is already given accurately.
            $endgroup$
            – Ian
            Oct 24 '17 at 12:47










          • $begingroup$
            @B.Swan It really depends on how the problem is given, as I said. But if $y-x$ is not already given accurately, then this trick gives a modest benefit at best (because you will already commit severe error inside the sine). Let me put it another way: this approach does not really help you define a function delta_sin(x,y). It helps you define a function delta_sin(x,h). If your caller needs delta_sin(x,y), then you will need to do something else to help them. But if they need delta_sin(x,h), then you can help them. Does that make sense?
            $endgroup$
            – Ian
            Oct 24 '17 at 12:48


















          $begingroup$
          So assuming $x=y+h$ it would become $2cos (x+frac{h}{2})sin(frac{h}{2})$. Can it be assumed that $y-x$ is given though? As a beginner in numerics I am never sure when the exact values are available.
          $endgroup$
          – B.Swan
          Oct 24 '17 at 12:46






          $begingroup$
          So assuming $x=y+h$ it would become $2cos (x+frac{h}{2})sin(frac{h}{2})$. Can it be assumed that $y-x$ is given though? As a beginner in numerics I am never sure when the exact values are available.
          $endgroup$
          – B.Swan
          Oct 24 '17 at 12:46














          $begingroup$
          @B.Swan Right. Then at least the $sin(h/2)$ can be accurately computed if $h$ is already given accurately.
          $endgroup$
          – Ian
          Oct 24 '17 at 12:47




          $begingroup$
          @B.Swan Right. Then at least the $sin(h/2)$ can be accurately computed if $h$ is already given accurately.
          $endgroup$
          – Ian
          Oct 24 '17 at 12:47












          $begingroup$
          @B.Swan It really depends on how the problem is given, as I said. But if $y-x$ is not already given accurately, then this trick gives a modest benefit at best (because you will already commit severe error inside the sine). Let me put it another way: this approach does not really help you define a function delta_sin(x,y). It helps you define a function delta_sin(x,h). If your caller needs delta_sin(x,y), then you will need to do something else to help them. But if they need delta_sin(x,h), then you can help them. Does that make sense?
          $endgroup$
          – Ian
          Oct 24 '17 at 12:48






          $begingroup$
          @B.Swan It really depends on how the problem is given, as I said. But if $y-x$ is not already given accurately, then this trick gives a modest benefit at best (because you will already commit severe error inside the sine). Let me put it another way: this approach does not really help you define a function delta_sin(x,y). It helps you define a function delta_sin(x,h). If your caller needs delta_sin(x,y), then you will need to do something else to help them. But if they need delta_sin(x,h), then you can help them. Does that make sense?
          $endgroup$
          – Ian
          Oct 24 '17 at 12:48













          2












          $begingroup$

          One possibility: To determine the sine you use the Maclaurin series, and the faster this converges the fewer ill-comditioned operations you need to perform for the factor $sin(frac{x-y}{2})$. The small argument in that factor gets you there.






          share|cite|improve this answer









          $endgroup$













          • $begingroup$
            Unless $x,y$ are large and their sines are nearly equal, I don't think this has anything to do with it.
            $endgroup$
            – Ian
            Oct 24 '17 at 12:33












          • $begingroup$
            If $x$ and $y$ are large the Maclaurin series themselves can become ill-conditioned. You can use periodicity and symmetry to reduce them if you have a sufficiently accurate rendering of $pi$.
            $endgroup$
            – Oscar Lanzi
            Oct 24 '17 at 12:37










          • $begingroup$
            That periodicity trick is actually very badly conditioned, cf. math.stackexchange.com/questions/1561713/… What I'm really saying here is that if $x,y$ are, say, confined to $[-3pi/4,3pi/4]$, the series themselves are not badly conditioned but the difference is badly conditioned. In these cases accelerating the convergence of the series does you rather little good. (Indeed, you actually pass the problem off to the cosine function without making it any better.)
            $endgroup$
            – Ian
            Oct 24 '17 at 12:40












          • $begingroup$
            Periodicity and symmetry. You can reduce arguments to an absolute value less than or equal to $pi/4$, actually.
            $endgroup$
            – Oscar Lanzi
            Oct 24 '17 at 12:42






          • 1




            $begingroup$
            Confining to $[-pi/4,pi/4]$ doesn't improve matters much more than confining to $[-3pi/4,3pi/4]$. Indeed the problem with keeping the series well-conditioned is to avoid being close to a zero of sine other than $x=0$. But again, provided you stick to a range where the series themselves are well-conditioned (or you use another method entirely to compute the trig functions themselves), the problem of computing the difference persists.
            $endgroup$
            – Ian
            Oct 24 '17 at 12:43


















          2












          $begingroup$

          One possibility: To determine the sine you use the Maclaurin series, and the faster this converges the fewer ill-comditioned operations you need to perform for the factor $sin(frac{x-y}{2})$. The small argument in that factor gets you there.






          share|cite|improve this answer









          $endgroup$













          • $begingroup$
            Unless $x,y$ are large and their sines are nearly equal, I don't think this has anything to do with it.
            $endgroup$
            – Ian
            Oct 24 '17 at 12:33












          • $begingroup$
            If $x$ and $y$ are large the Maclaurin series themselves can become ill-conditioned. You can use periodicity and symmetry to reduce them if you have a sufficiently accurate rendering of $pi$.
            $endgroup$
            – Oscar Lanzi
            Oct 24 '17 at 12:37










          • $begingroup$
            That periodicity trick is actually very badly conditioned, cf. math.stackexchange.com/questions/1561713/… What I'm really saying here is that if $x,y$ are, say, confined to $[-3pi/4,3pi/4]$, the series themselves are not badly conditioned but the difference is badly conditioned. In these cases accelerating the convergence of the series does you rather little good. (Indeed, you actually pass the problem off to the cosine function without making it any better.)
            $endgroup$
            – Ian
            Oct 24 '17 at 12:40












          • $begingroup$
            Periodicity and symmetry. You can reduce arguments to an absolute value less than or equal to $pi/4$, actually.
            $endgroup$
            – Oscar Lanzi
            Oct 24 '17 at 12:42






          • 1




            $begingroup$
            Confining to $[-pi/4,pi/4]$ doesn't improve matters much more than confining to $[-3pi/4,3pi/4]$. Indeed the problem with keeping the series well-conditioned is to avoid being close to a zero of sine other than $x=0$. But again, provided you stick to a range where the series themselves are well-conditioned (or you use another method entirely to compute the trig functions themselves), the problem of computing the difference persists.
            $endgroup$
            – Ian
            Oct 24 '17 at 12:43
















          2












          2








          2





          $begingroup$

          One possibility: To determine the sine you use the Maclaurin series, and the faster this converges the fewer ill-comditioned operations you need to perform for the factor $sin(frac{x-y}{2})$. The small argument in that factor gets you there.






          share|cite|improve this answer









          $endgroup$



          One possibility: To determine the sine you use the Maclaurin series, and the faster this converges the fewer ill-comditioned operations you need to perform for the factor $sin(frac{x-y}{2})$. The small argument in that factor gets you there.







          share|cite|improve this answer












          share|cite|improve this answer



          share|cite|improve this answer










          answered Oct 24 '17 at 12:31









          Oscar LanziOscar Lanzi

          12.9k12136




          12.9k12136












          • $begingroup$
            Unless $x,y$ are large and their sines are nearly equal, I don't think this has anything to do with it.
            $endgroup$
            – Ian
            Oct 24 '17 at 12:33












          • $begingroup$
            If $x$ and $y$ are large the Maclaurin series themselves can become ill-conditioned. You can use periodicity and symmetry to reduce them if you have a sufficiently accurate rendering of $pi$.
            $endgroup$
            – Oscar Lanzi
            Oct 24 '17 at 12:37










          • $begingroup$
            That periodicity trick is actually very badly conditioned, cf. math.stackexchange.com/questions/1561713/… What I'm really saying here is that if $x,y$ are, say, confined to $[-3pi/4,3pi/4]$, the series themselves are not badly conditioned but the difference is badly conditioned. In these cases accelerating the convergence of the series does you rather little good. (Indeed, you actually pass the problem off to the cosine function without making it any better.)
            $endgroup$
            – Ian
            Oct 24 '17 at 12:40












          • $begingroup$
            Periodicity and symmetry. You can reduce arguments to an absolute value less than or equal to $pi/4$, actually.
            $endgroup$
            – Oscar Lanzi
            Oct 24 '17 at 12:42






          • 1




            $begingroup$
            Confining to $[-pi/4,pi/4]$ doesn't improve matters much more than confining to $[-3pi/4,3pi/4]$. Indeed the problem with keeping the series well-conditioned is to avoid being close to a zero of sine other than $x=0$. But again, provided you stick to a range where the series themselves are well-conditioned (or you use another method entirely to compute the trig functions themselves), the problem of computing the difference persists.
            $endgroup$
            – Ian
            Oct 24 '17 at 12:43




















          • $begingroup$
            Unless $x,y$ are large and their sines are nearly equal, I don't think this has anything to do with it.
            $endgroup$
            – Ian
            Oct 24 '17 at 12:33












          • $begingroup$
            If $x$ and $y$ are large the Maclaurin series themselves can become ill-conditioned. You can use periodicity and symmetry to reduce them if you have a sufficiently accurate rendering of $pi$.
            $endgroup$
            – Oscar Lanzi
            Oct 24 '17 at 12:37










          • $begingroup$
            That periodicity trick is actually very badly conditioned, cf. math.stackexchange.com/questions/1561713/… What I'm really saying here is that if $x,y$ are, say, confined to $[-3pi/4,3pi/4]$, the series themselves are not badly conditioned but the difference is badly conditioned. In these cases accelerating the convergence of the series does you rather little good. (Indeed, you actually pass the problem off to the cosine function without making it any better.)
            $endgroup$
            – Ian
            Oct 24 '17 at 12:40












          • $begingroup$
            Periodicity and symmetry. You can reduce arguments to an absolute value less than or equal to $pi/4$, actually.
            $endgroup$
            – Oscar Lanzi
            Oct 24 '17 at 12:42






          • 1




            $begingroup$
            Confining to $[-pi/4,pi/4]$ doesn't improve matters much more than confining to $[-3pi/4,3pi/4]$. Indeed the problem with keeping the series well-conditioned is to avoid being close to a zero of sine other than $x=0$. But again, provided you stick to a range where the series themselves are well-conditioned (or you use another method entirely to compute the trig functions themselves), the problem of computing the difference persists.
            $endgroup$
            – Ian
            Oct 24 '17 at 12:43


















          $begingroup$
          Unless $x,y$ are large and their sines are nearly equal, I don't think this has anything to do with it.
          $endgroup$
          – Ian
          Oct 24 '17 at 12:33






          $begingroup$
          Unless $x,y$ are large and their sines are nearly equal, I don't think this has anything to do with it.
          $endgroup$
          – Ian
          Oct 24 '17 at 12:33














          $begingroup$
          If $x$ and $y$ are large the Maclaurin series themselves can become ill-conditioned. You can use periodicity and symmetry to reduce them if you have a sufficiently accurate rendering of $pi$.
          $endgroup$
          – Oscar Lanzi
          Oct 24 '17 at 12:37




          $begingroup$
          If $x$ and $y$ are large the Maclaurin series themselves can become ill-conditioned. You can use periodicity and symmetry to reduce them if you have a sufficiently accurate rendering of $pi$.
          $endgroup$
          – Oscar Lanzi
          Oct 24 '17 at 12:37












          $begingroup$
          That periodicity trick is actually very badly conditioned, cf. math.stackexchange.com/questions/1561713/… What I'm really saying here is that if $x,y$ are, say, confined to $[-3pi/4,3pi/4]$, the series themselves are not badly conditioned but the difference is badly conditioned. In these cases accelerating the convergence of the series does you rather little good. (Indeed, you actually pass the problem off to the cosine function without making it any better.)
          $endgroup$
          – Ian
          Oct 24 '17 at 12:40






          $begingroup$
          That periodicity trick is actually very badly conditioned, cf. math.stackexchange.com/questions/1561713/… What I'm really saying here is that if $x,y$ are, say, confined to $[-3pi/4,3pi/4]$, the series themselves are not badly conditioned but the difference is badly conditioned. In these cases accelerating the convergence of the series does you rather little good. (Indeed, you actually pass the problem off to the cosine function without making it any better.)
          $endgroup$
          – Ian
          Oct 24 '17 at 12:40














          $begingroup$
          Periodicity and symmetry. You can reduce arguments to an absolute value less than or equal to $pi/4$, actually.
          $endgroup$
          – Oscar Lanzi
          Oct 24 '17 at 12:42




          $begingroup$
          Periodicity and symmetry. You can reduce arguments to an absolute value less than or equal to $pi/4$, actually.
          $endgroup$
          – Oscar Lanzi
          Oct 24 '17 at 12:42




          1




          1




          $begingroup$
          Confining to $[-pi/4,pi/4]$ doesn't improve matters much more than confining to $[-3pi/4,3pi/4]$. Indeed the problem with keeping the series well-conditioned is to avoid being close to a zero of sine other than $x=0$. But again, provided you stick to a range where the series themselves are well-conditioned (or you use another method entirely to compute the trig functions themselves), the problem of computing the difference persists.
          $endgroup$
          – Ian
          Oct 24 '17 at 12:43






          $begingroup$
          Confining to $[-pi/4,pi/4]$ doesn't improve matters much more than confining to $[-3pi/4,3pi/4]$. Indeed the problem with keeping the series well-conditioned is to avoid being close to a zero of sine other than $x=0$. But again, provided you stick to a range where the series themselves are well-conditioned (or you use another method entirely to compute the trig functions themselves), the problem of computing the difference persists.
          $endgroup$
          – Ian
          Oct 24 '17 at 12:43













          2












          $begingroup$

          While a function $f : A to B$ is a triple, consisting of a domain $A$, a codomain $B$ and a rule which assigns to each element $x in A$ exactly one element $f(x) in B$, too many focus exclusively on rule and forget to carefully specify the domain and the codomain.



          In this case the function in question is $f : mathcal F times mathcal F rightarrow mathbb R$, where $$f(x,y) = sin(x) - sin(y),$$
          and $mathcal F$ is the set of machine numbers, say, double precision floating point numbers. I will explain below why I know that this is the right domain.



          The problem of computing a difference $d = a - b$ between two real numbers $a$ and $b$ is ill conditioned when $a approx b$. Indeed if $hat{a} = a(1+delta_a)$ and $hat{b} = b(1 + delta_b)$ are the best available approximations of $a$ and $b$, then we can not hope to compute a better approximation of $d$ than $hat{d} = hat{a} - hat{b}$. The relative error $$r = frac{d - hat{d}}{d},$$
          satisfies the bound
          $$ |r| leq frac{|a| + |b|}{|a-b|} max{|delta_a|,|delta_b|}$$
          When $a approx b$, we can not guarantee that the difference $d$ is computed with a small relative error. In practice, the relative error is large. We say that the subtraction magnifies the error committed when replacing $a$ with $hat{a}$ and $b$ with $hat{b}$.



          In your situation $a = sin(x)$ and $b = sin(y)$. Errors are committed when computing the sine function. No matter how skilled we are, the best we can hope for is to obtain the floating point representation of $a$, i.e. $text{fl}(a) = sin(x)(1 + delta)$, where $|delta| leq u$ and $u$ is the unit roundoff. Why? The computer may well have extra wide registers for internal use, but eventually, the result has to be rounded to, say, double precision, so that the result can be stored in memory. It follows, that if we compute $f$ using the definition and $x approx y$, then the computed result will have relative error which is many times the unit roundoff.



          In order to avoid the offending subtraction, we turn to the function $g : mathcal F times mathcal F to mathbb R$ given by
          $$ g(x,y) = 2 cos left( frac{x+y}{2} right) sin left(frac{x-y}{2} right)$$
          In absence of rounding errors $f(x,y) = g(x,y)$, but in floating point arithmetic they behave quite differently. The subtraction of two floating point numbers $x$ and $y$ is perfectly safe. In fact, if $y/2 leq x leq 2y$, then subtraction is done with one guard digit, then $x-y$ is computed exactly.



          We are not entirely in the clear, as $x + y$ need not be a floating point number, but is computed with a relative error bounded by the unit roundoff. In the unfortunate event that $(x+y)/2 approx (frac{1}{2} + k) pi$, where $k in mathbb Z$ the calculation of $g$ suffers from the fact that cosine is ill conditioned near a root.



          Using a conditional to pick the correct expressions allows us to cover a larger subset of the domain.






          In general, why $mathcal F$ rather than $mathbb R$? Consider the simpler problem of computing $f : mathbb R rightarrow mathbb R$. In general, you do not know the exact value of $x$, and the best you can hope for is $hat{x}$, the floating point represen-tation of $x$. The impact of this error is controlled by the condition number of $f$. There is nothing you can do about large condition numbers, except switch to better hardware of simulate a smaller unit roundoff $u'$. This leaves you with the task of computing $f(hat{x})$, where $hat{x} in mathcal F$ is a machine number. That is why $mathcal F$ is the natural domain during this the second stage of designing an algorithm for computing approximations of $f : mathbb R to mathbb R$.




          share|cite|improve this answer











          $endgroup$













          • $begingroup$
            The book I am using ("Algorithmic mathematics" by Vygen/Hougardy, I think it's only available in German) actually does define a computation problem as a triple with two sets $A, B$ and a relation $R$, which is a subset of $A times B$, and in numerical computational problems it actually sets $A,B$ as subsets of $mathbb{R}$, not necessarily as the machine numbers. It does that in the definition of the condition, too. I will take my time to think about your answer though and discuss it with my professor, thank you.
            $endgroup$
            – B.Swan
            Oct 24 '17 at 20:38












          • $begingroup$
            @B.Swan Conditioning should certainly be considered for subsets of $mathbb R$. Once this first stage is complete, one must consider how the target function is evaluated for floating point numbers. This is frequently difficult as seen above, and a rewrite or an approximation must be sought. Then comes the third and final stage which consist evalutating the chosen approximation. We want $T(x)$, but can only get $hat{A}(hat{x})$. The error is can be expressed as $T(x) - hat{A}(hat{x}) = T(x) - T(hat{x}) + T(hat{x}) - A(hat{x}) + A(hat{x}) - hat{A}(hat{x})$
            $endgroup$
            – Carl Christian
            Oct 24 '17 at 21:55


















          2












          $begingroup$

          While a function $f : A to B$ is a triple, consisting of a domain $A$, a codomain $B$ and a rule which assigns to each element $x in A$ exactly one element $f(x) in B$, too many focus exclusively on rule and forget to carefully specify the domain and the codomain.



          In this case the function in question is $f : mathcal F times mathcal F rightarrow mathbb R$, where $$f(x,y) = sin(x) - sin(y),$$
          and $mathcal F$ is the set of machine numbers, say, double precision floating point numbers. I will explain below why I know that this is the right domain.



          The problem of computing a difference $d = a - b$ between two real numbers $a$ and $b$ is ill conditioned when $a approx b$. Indeed if $hat{a} = a(1+delta_a)$ and $hat{b} = b(1 + delta_b)$ are the best available approximations of $a$ and $b$, then we can not hope to compute a better approximation of $d$ than $hat{d} = hat{a} - hat{b}$. The relative error $$r = frac{d - hat{d}}{d},$$
          satisfies the bound
          $$ |r| leq frac{|a| + |b|}{|a-b|} max{|delta_a|,|delta_b|}$$
          When $a approx b$, we can not guarantee that the difference $d$ is computed with a small relative error. In practice, the relative error is large. We say that the subtraction magnifies the error committed when replacing $a$ with $hat{a}$ and $b$ with $hat{b}$.



          In your situation $a = sin(x)$ and $b = sin(y)$. Errors are committed when computing the sine function. No matter how skilled we are, the best we can hope for is to obtain the floating point representation of $a$, i.e. $text{fl}(a) = sin(x)(1 + delta)$, where $|delta| leq u$ and $u$ is the unit roundoff. Why? The computer may well have extra wide registers for internal use, but eventually, the result has to be rounded to, say, double precision, so that the result can be stored in memory. It follows, that if we compute $f$ using the definition and $x approx y$, then the computed result will have relative error which is many times the unit roundoff.



          In order to avoid the offending subtraction, we turn to the function $g : mathcal F times mathcal F to mathbb R$ given by
          $$ g(x,y) = 2 cos left( frac{x+y}{2} right) sin left(frac{x-y}{2} right)$$
          In absence of rounding errors $f(x,y) = g(x,y)$, but in floating point arithmetic they behave quite differently. The subtraction of two floating point numbers $x$ and $y$ is perfectly safe. In fact, if $y/2 leq x leq 2y$, then subtraction is done with one guard digit, then $x-y$ is computed exactly.



          We are not entirely in the clear, as $x + y$ need not be a floating point number, but is computed with a relative error bounded by the unit roundoff. In the unfortunate event that $(x+y)/2 approx (frac{1}{2} + k) pi$, where $k in mathbb Z$ the calculation of $g$ suffers from the fact that cosine is ill conditioned near a root.



          Using a conditional to pick the correct expressions allows us to cover a larger subset of the domain.






          In general, why $mathcal F$ rather than $mathbb R$? Consider the simpler problem of computing $f : mathbb R rightarrow mathbb R$. In general, you do not know the exact value of $x$, and the best you can hope for is $hat{x}$, the floating point represen-tation of $x$. The impact of this error is controlled by the condition number of $f$. There is nothing you can do about large condition numbers, except switch to better hardware of simulate a smaller unit roundoff $u'$. This leaves you with the task of computing $f(hat{x})$, where $hat{x} in mathcal F$ is a machine number. That is why $mathcal F$ is the natural domain during this the second stage of designing an algorithm for computing approximations of $f : mathbb R to mathbb R$.




          share|cite|improve this answer











          $endgroup$













          • $begingroup$
            The book I am using ("Algorithmic mathematics" by Vygen/Hougardy, I think it's only available in German) actually does define a computation problem as a triple with two sets $A, B$ and a relation $R$, which is a subset of $A times B$, and in numerical computational problems it actually sets $A,B$ as subsets of $mathbb{R}$, not necessarily as the machine numbers. It does that in the definition of the condition, too. I will take my time to think about your answer though and discuss it with my professor, thank you.
            $endgroup$
            – B.Swan
            Oct 24 '17 at 20:38












          • $begingroup$
            @B.Swan Conditioning should certainly be considered for subsets of $mathbb R$. Once this first stage is complete, one must consider how the target function is evaluated for floating point numbers. This is frequently difficult as seen above, and a rewrite or an approximation must be sought. Then comes the third and final stage which consist evalutating the chosen approximation. We want $T(x)$, but can only get $hat{A}(hat{x})$. The error is can be expressed as $T(x) - hat{A}(hat{x}) = T(x) - T(hat{x}) + T(hat{x}) - A(hat{x}) + A(hat{x}) - hat{A}(hat{x})$
            $endgroup$
            – Carl Christian
            Oct 24 '17 at 21:55
















          2












          2








          2





          $begingroup$

          While a function $f : A to B$ is a triple, consisting of a domain $A$, a codomain $B$ and a rule which assigns to each element $x in A$ exactly one element $f(x) in B$, too many focus exclusively on rule and forget to carefully specify the domain and the codomain.



          In this case the function in question is $f : mathcal F times mathcal F rightarrow mathbb R$, where $$f(x,y) = sin(x) - sin(y),$$
          and $mathcal F$ is the set of machine numbers, say, double precision floating point numbers. I will explain below why I know that this is the right domain.



          The problem of computing a difference $d = a - b$ between two real numbers $a$ and $b$ is ill conditioned when $a approx b$. Indeed if $hat{a} = a(1+delta_a)$ and $hat{b} = b(1 + delta_b)$ are the best available approximations of $a$ and $b$, then we can not hope to compute a better approximation of $d$ than $hat{d} = hat{a} - hat{b}$. The relative error $$r = frac{d - hat{d}}{d},$$
          satisfies the bound
          $$ |r| leq frac{|a| + |b|}{|a-b|} max{|delta_a|,|delta_b|}$$
          When $a approx b$, we can not guarantee that the difference $d$ is computed with a small relative error. In practice, the relative error is large. We say that the subtraction magnifies the error committed when replacing $a$ with $hat{a}$ and $b$ with $hat{b}$.



          In your situation $a = sin(x)$ and $b = sin(y)$. Errors are committed when computing the sine function. No matter how skilled we are, the best we can hope for is to obtain the floating point representation of $a$, i.e. $text{fl}(a) = sin(x)(1 + delta)$, where $|delta| leq u$ and $u$ is the unit roundoff. Why? The computer may well have extra wide registers for internal use, but eventually, the result has to be rounded to, say, double precision, so that the result can be stored in memory. It follows, that if we compute $f$ using the definition and $x approx y$, then the computed result will have relative error which is many times the unit roundoff.



          In order to avoid the offending subtraction, we turn to the function $g : mathcal F times mathcal F to mathbb R$ given by
          $$ g(x,y) = 2 cos left( frac{x+y}{2} right) sin left(frac{x-y}{2} right)$$
          In absence of rounding errors $f(x,y) = g(x,y)$, but in floating point arithmetic they behave quite differently. The subtraction of two floating point numbers $x$ and $y$ is perfectly safe. In fact, if $y/2 leq x leq 2y$, then subtraction is done with one guard digit, then $x-y$ is computed exactly.



          We are not entirely in the clear, as $x + y$ need not be a floating point number, but is computed with a relative error bounded by the unit roundoff. In the unfortunate event that $(x+y)/2 approx (frac{1}{2} + k) pi$, where $k in mathbb Z$ the calculation of $g$ suffers from the fact that cosine is ill conditioned near a root.



          Using a conditional to pick the correct expressions allows us to cover a larger subset of the domain.






          In general, why $mathcal F$ rather than $mathbb R$? Consider the simpler problem of computing $f : mathbb R rightarrow mathbb R$. In general, you do not know the exact value of $x$, and the best you can hope for is $hat{x}$, the floating point represen-tation of $x$. The impact of this error is controlled by the condition number of $f$. There is nothing you can do about large condition numbers, except switch to better hardware of simulate a smaller unit roundoff $u'$. This leaves you with the task of computing $f(hat{x})$, where $hat{x} in mathcal F$ is a machine number. That is why $mathcal F$ is the natural domain during this the second stage of designing an algorithm for computing approximations of $f : mathbb R to mathbb R$.




          share|cite|improve this answer











          $endgroup$



          While a function $f : A to B$ is a triple, consisting of a domain $A$, a codomain $B$ and a rule which assigns to each element $x in A$ exactly one element $f(x) in B$, too many focus exclusively on rule and forget to carefully specify the domain and the codomain.



          In this case the function in question is $f : mathcal F times mathcal F rightarrow mathbb R$, where $$f(x,y) = sin(x) - sin(y),$$
          and $mathcal F$ is the set of machine numbers, say, double precision floating point numbers. I will explain below why I know that this is the right domain.



          The problem of computing a difference $d = a - b$ between two real numbers $a$ and $b$ is ill conditioned when $a approx b$. Indeed if $hat{a} = a(1+delta_a)$ and $hat{b} = b(1 + delta_b)$ are the best available approximations of $a$ and $b$, then we can not hope to compute a better approximation of $d$ than $hat{d} = hat{a} - hat{b}$. The relative error $$r = frac{d - hat{d}}{d},$$
          satisfies the bound
          $$ |r| leq frac{|a| + |b|}{|a-b|} max{|delta_a|,|delta_b|}$$
          When $a approx b$, we can not guarantee that the difference $d$ is computed with a small relative error. In practice, the relative error is large. We say that the subtraction magnifies the error committed when replacing $a$ with $hat{a}$ and $b$ with $hat{b}$.



          In your situation $a = sin(x)$ and $b = sin(y)$. Errors are committed when computing the sine function. No matter how skilled we are, the best we can hope for is to obtain the floating point representation of $a$, i.e. $text{fl}(a) = sin(x)(1 + delta)$, where $|delta| leq u$ and $u$ is the unit roundoff. Why? The computer may well have extra wide registers for internal use, but eventually, the result has to be rounded to, say, double precision, so that the result can be stored in memory. It follows, that if we compute $f$ using the definition and $x approx y$, then the computed result will have relative error which is many times the unit roundoff.



          In order to avoid the offending subtraction, we turn to the function $g : mathcal F times mathcal F to mathbb R$ given by
          $$ g(x,y) = 2 cos left( frac{x+y}{2} right) sin left(frac{x-y}{2} right)$$
          In absence of rounding errors $f(x,y) = g(x,y)$, but in floating point arithmetic they behave quite differently. The subtraction of two floating point numbers $x$ and $y$ is perfectly safe. In fact, if $y/2 leq x leq 2y$, then subtraction is done with one guard digit, then $x-y$ is computed exactly.



          We are not entirely in the clear, as $x + y$ need not be a floating point number, but is computed with a relative error bounded by the unit roundoff. In the unfortunate event that $(x+y)/2 approx (frac{1}{2} + k) pi$, where $k in mathbb Z$ the calculation of $g$ suffers from the fact that cosine is ill conditioned near a root.



          Using a conditional to pick the correct expressions allows us to cover a larger subset of the domain.






          In general, why $mathcal F$ rather than $mathbb R$? Consider the simpler problem of computing $f : mathbb R rightarrow mathbb R$. In general, you do not know the exact value of $x$, and the best you can hope for is $hat{x}$, the floating point represen-tation of $x$. The impact of this error is controlled by the condition number of $f$. There is nothing you can do about large condition numbers, except switch to better hardware of simulate a smaller unit roundoff $u'$. This leaves you with the task of computing $f(hat{x})$, where $hat{x} in mathcal F$ is a machine number. That is why $mathcal F$ is the natural domain during this the second stage of designing an algorithm for computing approximations of $f : mathbb R to mathbb R$.






























          edited Oct 25 '17 at 14:32

























          answered Oct 24 '17 at 20:17









Carl Christian

          5,4931721
















• $begingroup$
  The book I am using ("Algorithmic mathematics" by Vygen/Hougardy, I think it's only available in German) actually does define a computation problem as a triple with two sets $A, B$ and a relation $R$, which is a subset of $A \times B$, and in numerical computational problems it actually sets $A, B$ as subsets of $\mathbb{R}$, not necessarily as the machine numbers. It does that in the definition of the condition, too. I will take my time to think about your answer though and discuss it with my professor, thank you.
  $endgroup$
  – B.Swan
  Oct 24 '17 at 20:38












• $begingroup$
  @B.Swan Conditioning should certainly be considered for subsets of $\mathbb R$. Once this first stage is complete, one must consider how the target function is evaluated for floating point numbers. This is frequently difficult, as seen above, and a rewrite or an approximation must be sought. Then comes the third and final stage, which consists of evaluating the chosen approximation. We want $T(x)$, but can only get $\hat{A}(\hat{x})$. The error can be expressed as $T(x) - \hat{A}(\hat{x}) = T(x) - T(\hat{x}) + T(\hat{x}) - A(\hat{x}) + A(\hat{x}) - \hat{A}(\hat{x})$.
  $endgroup$
  – Carl Christian
  Oct 24 '17 at 21:55



















