Linear approximation to $x\ln(x)$
Suppose we need to approximate $f(8.4)$, where $f(x) = x\ln(x)$, by a linear polynomial. We are given the following nodes: $x_0 = 8.1$, $x_1 = 8.3$, $x_2 = 8.6$, $x_3 = 8.7$.

I realize that we can use Lagrange interpolation to fit a linear polynomial through any two of the nodes. But, as I have been given $4$ nodes, how should I choose the best two points for the approximation (without evaluating $f$ directly at $x = 8.4$)?

numerical-methods numerical-optimization lagrange-interpolation
asked Feb 3 at 5:11 by John
2 Answers
I would use a piecewise linear function, fitting a line between each consecutive pair of points. Since $8.4$ lies between $8.3$ and $8.6$, use those two points.

answered Feb 3 at 5:22 by Ross Millikan
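For concreteness, here is a minimal Python sketch of this bracketing-node choice; the node values are computed from $f$ here only to stand in for the tabulated data, and the bound uses $f''(x) = 1/x$:

```python
import math

f = lambda x: x * math.log(x)

x0, x1, x = 8.3, 8.6, 8.4   # bracketing nodes and evaluation point
y0, y1 = f(x0), f(x1)       # stand-ins for the tabulated node values

# Linear Lagrange interpolant through (x0, y0) and (x1, y1)
p = y0 * (x - x1) / (x0 - x1) + y1 * (x - x0) / (x1 - x0)

# Standard linear-interpolation error bound:
#   |f(x) - p(x)| <= (max|f''| / 2) * |(x - x0)(x - x1)|,
# and f''(x) = 1/x, so max|f''| on [8.3, 8.6] is 1/8.3.
bound = (1 / 8.3) / 2 * abs((x - x0) * (x - x1))

print(p, f(x), abs(p - f(x)), bound)
# ~ 17.87833 vs 17.87714; error ~ 0.00119, bound ~ 0.00120
```

Among the six possible node pairs, the bracketing pair $(8.3, 8.6)$ minimizes $|(8.4 - x_i)(8.4 - x_j)|$ and hence this error bound, which is one way to justify the choice without ever evaluating $f(8.4)$.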
I am aware that a piecewise linear function can be used, but shouldn't our driving force be to minimize the error in approximating $f$ at $8.4$? (Please correct me if I am saying something wrong.) So, basically, if we know $f$ at two nodes, can we not extract more information about $f(8.4)$ from the other two nodes that we have not used? – John, Feb 3 at 5:28
You can't with a linear function; for a linear function you are best served by using the two closest points. If you had data at $8.0, 8.3, 9.0$, you could argue for using the $8.0, 8.3$ data, but it is also attractive to use bracketing points. Extracting information from the other points amounts to higher-order interpolation. That is a fine idea, but your question prohibits it. – Ross Millikan, Feb 3 at 5:33
Thank you so much! Also, what are 'bracketing points'? I haven't heard the term before. – John, Feb 3 at 5:39
Bracketing points are points on opposite sides of the point of interest. For interpolation, they are usually the nodes adjacent to the point of interest. For 1D root finding, they are points where the function is known to have opposite signs, meaning (assuming it is continuous) there is a root somewhere in between. – Ross Millikan, Feb 4 at 14:06
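To make Ross's point about higher-order interpolation concrete, here is a short sketch (same caveat as above: node values are computed from $f$ to simulate the table) comparing the two-point linear fit with the cubic Lagrange interpolant through all four nodes:

```python
import math

def lagrange(x, xs, ys):
    """Evaluate the Lagrange interpolant through the points (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

f = lambda x: x * math.log(x)
nodes = [8.1, 8.3, 8.6, 8.7]
vals = [f(xi) for xi in nodes]

linear = lagrange(8.4, nodes[1:3], vals[1:3])   # bracketing pair 8.3, 8.6
cubic = lagrange(8.4, nodes, vals)              # all four nodes
print(abs(linear - f(8.4)), abs(cubic - f(8.4)))
# The cubic error (~3e-7, consistent with the degree-3 error formula,
# since f''''(x) = 2/x^3) is orders of magnitude below the linear
# error (~1.2e-3).
```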
What I would do is consider
$$\Phi=\int_{8.1}^{8.7} \left(x \log(x)-a-bx\right)^2 \,dx,$$
which is equivalent to a linear regression with an infinite number of data points.

Expanding and integrating by parts gives
$$I=\int \left(x \log(x)-a-bx\right)^2 \,dx$$
$$I=\frac{1}{54} x \left(54 a^2+27 a (2 b x+x)-6 x \log (x) (9 a+6 b x+2 x)+2 \left(9 b^2+6 b+2\right) x^2+18 x^2 \log ^2(x)\right).$$
Evaluating at the bounds gives
$$\Phi=0.6 a^2+a (10.08 b-21.4547)+42.354 b^2-180.332 b+191.97.$$
Compute the partial derivatives and set them equal to $0$; this gives two equations in the two unknowns $(a,b)$, with solution $\{a= -8.39714,\ b= 3.12810\}$.

Then, for $x=8.4$, you get $17.8789$, while the exact value is $17.8771$.

Edit

Using exact arithmetic, it is possible to compute the exact values of the parameters $a$ and $b$ both for the classical least-squares fit through the four data points and for the integral formulation. The formulae will not be reported here (too messy); for sure, they are not very different $(0.008\%)$. However,
$$\frac{\Phi_{\text{ols}}}{\Phi_{\text{int}}}=2.0414.$$

edited Feb 3 at 14:23, answered Feb 3 at 6:02 by Claude Leibovici
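A sketch of this computation in Python — my own translation of the answer's calculus into the normal equations, using `scipy.integrate.quad` in place of the closed-form antiderivative:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: x * np.log(x)
lo, hi = 8.1, 8.7

# Setting the partials of Phi(a, b) = ∫ (f(x) - a - b x)^2 dx to zero
# gives the normal equations:
#   a ∫ 1 dx + b ∫ x dx   = ∫ f(x) dx
#   a ∫ x dx + b ∫ x^2 dx = ∫ x f(x) dx
m0 = hi - lo
m1 = quad(lambda x: x, lo, hi)[0]
m2 = quad(lambda x: x**2, lo, hi)[0]
r0 = quad(f, lo, hi)[0]
r1 = quad(lambda x: x * f(x), lo, hi)[0]

a, b = np.linalg.solve(np.array([[m0, m1], [m1, m2]]),
                       np.array([r0, r1]))
print(a, b)                 # ~ -8.39714, 3.12810
print(a + b * 8.4, f(8.4))  # ~ 17.8789 vs ~ 17.8771
```

Incidentally, $8.4$ is the midpoint of $[8.1, 8.7]$, and a least-squares line passes through the point of means, so the fitted value at $8.4$ is exactly the average of $f$ over the interval.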
Never thought of it this way! Can you please suggest some reading on this approach? But note that the linear approximation yields $17.87833$, which is actually a bit more accurate than this method. – John, Feb 3 at 6:19
@John. As I wrote, this is equivalent to linear regression with an infinite number of data points: the summation is transformed into an integral. Think about it and you will see that it is simple. The result is the best fit over the entire range of $x$. – Claude Leibovici, Feb 3 at 6:44
Yes, you are correct! I am still a novice learner, so I could not appreciate this fact earlier. Thanks a lot! :) – John, Feb 3 at 7:01
@John. Almost every single day, I am still a novice learner too. That is the beauty of mathematics; it is such a vast domain! – Claude Leibovici, Feb 3 at 7:29