Loss of significance in a complicated recurrence in R, related to Gaussian quadrature
I am not a programmer (this is my first post here), and I don't have much experience with floating-point arithmetic. I apologize if I missed something obvious.
I have been trying to find the parameters for Gaussian quadrature with a custom weight function, using the general method described, for example, here. The method works, as checked for a small number of points, where the parameters can be found by hand.
However, for a large number of quadrature points it makes sense to compute the parameters numerically. The moments can be expressed through a hypergeometric function, which is given by a quickly converging series that I use here.
My algorithm for computing the necessary parameters a_n and b_n involves finding the coefficients of the polynomials explicitly and using the formulas provided in the reference. In the end we have a complicated recurrence, which involves quite a few additions, subtractions, multiplications and divisions.
The problem is: I am pretty certain that in my case all a_n = 0.5 exactly. But the algorithm I have implemented in R quickly loses digits, giving 0.4999999981034791707302 instead by the 5th step. What can I change in the algorithm to avoid this problem?
Here's the code:
#Moments of sin(pi x) on [0,1] (hypergeometric function)
FIm <- function(n,N){
  z <- -pi^2/4;
  f <- 1;
  k <- 0;
  a <- (n+2)/2;
  b <- 3/2;
  c <- (n+4)/2;
  while(k < N){
    f <- 1+f*z*(N-1-k+a)/(N-k)/(N-1-k+b)/(N-1-k+c);
    k <- k+1
  }
  return(f*pi/(n+2))
};
#Number of quadrature points
Nq <- 5;
n <- 0:(2*Nq+1);
#Moments
mu <- FIm(n,35);
#Recurrence parameters
an <- rep(0,Nq+1);
bn <- rep(0,Nq+1);
sn <- rep(0,Nq+1);
#Initial values
sn[1] <- mu[1];
an[1] <- mu[2]/sn[1];
#Coefficients of the orthogonal polynomials
Ank <- matrix(rep(0,(Nq+1)^2), nrow = Nq+1, ncol = Nq+1, byrow=TRUE);
#Initial values
Ank[1,1] <- 1;
Ank[2,1] <- - an[1];
Ank[2,2] <- 1;
#Starting recurrence
nn <- 2;
while(nn <= Nq){
  #Computing the coefficients of the squared polynomial
  Blj <- outer(Ank[nn,], Ank[nn,], FUN = "*");
  Cj <- rep(0,2*nn-1);
  j <- 1;
  while(j <= nn){
    l <- j;
    while(l <= nn){
      if(j==l){
        Cj[j+l-1] <- Cj[j+l-1]+Blj[j,l]
      } else {
        Cj[j+l-1] <- Cj[j+l-1]+2*Blj[j,l]
      };
      l <- l+1
    };
    j <- j+1
  };
  #Computing the inner products and applying the recurrence relations
  sn[nn] <- sum(Cj*mu[1:(2*nn-1)]);
  an[nn] <- sum(Cj*mu[2:(2*nn)])/sn[nn];
  bn[nn] <- sn[nn]/sn[nn-1];
  k <- 1;
  while(k <= nn+1){
    if(k>1){
      Ank[nn+1,k] <- Ank[nn+1,k]+Ank[nn,k-1]
    };
    Ank[nn+1,k] <- Ank[nn+1,k]-an[nn]*Ank[nn,k]-bn[nn]*Ank[nn-1,k];
    k <- k+1
  };
  nn <- nn+1
};
#Computing the coefficients of the squared polynomial
Blj <- outer(Ank[nn,], Ank[nn,], FUN = "*");
Cj <- rep(0,2*nn-1);
j <- 1;
while(j <= nn){
  l <- j;
  while(l <= nn){
    if(j==l){
      Cj[j+l-1] <- Cj[j+l-1]+Blj[j,l]
    } else {
      Cj[j+l-1] <- Cj[j+l-1]+2*Blj[j,l]
    };
    l <- l+1
  };
  j <- j+1
};
#Computing the inner products and applying the recurrence relations
sn[nn] <- sum(Cj*mu[1:(2*nn-1)]);
an[nn] <- sum(Cj*mu[2:(2*nn)])/sn[nn];
bn[nn] <- sn[nn]/sn[nn-1];
an
The output I get for a_n is:
[1] 0.5000000000000000000000 0.5000000000000004440892 0.4999999999999593103261
[4] 0.4999999999963960495286 0.4999999998869631423482 0.4999999981034791707302
An obvious problem could be the computation of the moments, as it is done here, but increasing the number of terms N doesn't help, and, more importantly, using the exact values for the moments doesn't change the output at all:
mu[1] <- 2/pi;
mu[2] <- 1/pi;
mu[3] <- 1/pi-4/pi^3;
mu[4] <- 1/pi-6/pi^3;
mu[5] <- (48 - 12*pi^2 + pi^4)/pi^5;
mu[6] <- (120 - 20*pi^2 + pi^4)/pi^5;
mu[7] <- (-1440 + 360*pi^2 - 30*pi^4 + pi^6)/pi^7;
mu[8] <- (-5040 + 840*pi^2 - 42*pi^4 + pi^6)/pi^7;
mu[9] <- (80640 - 20160*pi^2 + 1680*pi^4 - 56*pi^6 + pi^8)/pi^9;
mu[10] <- (362880 - 60480*pi^2 + 3024*pi^4 - 72*pi^6 + pi^8)/pi^9;
mu[11] <- (-7257600 + 1814400*pi^2 - 151200*pi^4 + 5040*pi^6 - 90*pi^8 + pi^10)/pi^11;
mu[12] <- (-39916800 + 6652800*pi^2 - 332640*pi^4 + 7920*pi^6 - 110*pi^8 + pi^10)/pi^11;
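As an extra sanity check (a small sketch that is not part of the original code; it uses only base R's integrate()), the first few moments can also be compared against direct numerical integration of x^k * sin(pi x) on [0, 1]:
#Added sketch: cross-check the first few moments against base R's integrate()
check_mu <- sapply(0:5, function(k)
  integrate(function(x) x^k * sin(pi*x), lower = 0, upper = 1)$value)
max(abs(check_mu - mu[1:6]))  #should be at round-off level (~1e-15), so the moments themselves look fine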
Using R for this task is my personal preference (as well as a learning opportunity), so if you think I need to use another language, I guess I will just do this in Mathematica, where the precision can be set arbitrarily high.
r floating-accuracy significant-digits
Floating-point arithmetic is generally not intended to provide exact answers as you appear to be using it. So you should expect to get approximate answers unless you take special steps to ensure the results are exact. Is there a reason you expect exact answers?
– Eric Postpischil
Nov 21 '18 at 19:19
@EricPostpischil, I don't expect exact answers, but I would like the loss of digits to be a little less pronounced, that's all. I know there are techniques to mitigate this (like Kahan's summation algorithm), but it doesn't seem to help in this case.
– Yuriy S
Nov 22 '18 at 5:29
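For reference, a compensated (Kahan) summation in R looks like the sketch below. This is only an illustration of the technique mentioned in the comment, not code from the question, and, as noted, it cannot fix the ill-conditioning of the moment problem itself.
#Illustrative sketch of Kahan (compensated) summation; not from the original post
kahan_sum <- function(x) {
  s <- 0; cc <- 0                #running sum and compensation term
  for (xi in x) {
    y <- xi - cc                 #subtract the error carried over from the last step
    t <- s + y                   #low-order digits of y may be lost in this addition
    cc <- (t - s) - y            #recover what was lost
    s <- t
  }
  s
}
#e.g. kahan_sum(Cj*mu[1:(2*nn-1)]) in place of sum(Cj*mu[1:(2*nn-1)])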
1 Answer
In the paper you cited:
However, the solution of the set of algebraic equations for the coefficients a_j and b_j in terms of the moments µ_k is extremely ill-conditioned: “Even in double precision it is not unusual to lose all accuracy by the time n = 12” [1].
The problem of solving for a_j and b_j given the µ_k is extremely ill-conditioned, and it gets exponentially worse as the number of points increases. In other words, a tiny change in µ_k (due to the limited precision of floating-point numbers) leads to a large change in the corresponding a_j and b_j.
To get accurate results with this method, it is necessary to compute µ_k significantly more accurately. The paper you cited, for example, finds it necessary to compute µ_k to an accuracy of thousands of digits for n = 64.
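One way to get that extra precision while staying in R is to evaluate the moment series in multiple-precision arithmetic. The sketch below is an illustration I am adding, not part of the original question or answer: it assumes the Rmpfr package is installed, and the 256-bit precision and the name FIm_mpfr are arbitrary choices.
#Sketch (assumes the Rmpfr package): the same series as FIm, evaluated in multiple precision
library(Rmpfr)
FIm_mpfr <- function(n, N, prec = 256) {
  piM <- Const("pi", prec)       #pi to 'prec' bits
  z <- -piM^2/4
  f <- mpfr(1, prec)
  a <- (n+2)/2
  b <- 3/2
  c <- (n+4)/2
  for (k in 0:(N-1)) {           #same recurrence over the series terms as in FIm
    f <- 1 + f*z*(N-1-k+a)/(N-k)/(N-1-k+b)/(N-1-k+c)
  }
  f*piM/(n+2)
}
mu <- FIm_mpfr(0:(2*Nq+1), 35)   #high-precision moments
For this to pay off, the rest of the recurrence (Cj, Ank, sn, an, bn) also has to be carried in mpfr objects; assigning the results back into ordinary numeric vectors or matrices rounds them to double precision again, and the loss of accuracy reappears at the same point.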