Rational approximation of square roots
I'm trying to find the best way to solve for rational approximations of the square root of a number, given some pretty serious constraints on the operations I can use to calculate it. My criteria for the algorithm are that it must:
- be solvable through recursion
- operate solely on ratios of integers
- use only addition, subtraction, multiplication, division, or equality
(The criteria are important because I'm trying to use C++ to approximate the root at compile time using template metaprogramming, and the available operations are extremely limited.)
The approach that I'm considering is supposedly based on an ancient Babylonian method and involves iteratively solving:
$$
k_{n+1} = \frac{k_n + N/k_n}{2}
$$
Where:
- $N$ is the number whose root we are looking for
- $k_n$ is the $n$th approximation of the root
- $k_0$ is chosen such that $k_0^2$ is less than $N$
So, it seems I could pretty trivially implement this algorithm if:
- I fix $k_0$ to be $1$, which probably hurts convergence but will work for any $N > 1$
- I choose a fixed recursion depth
The problems that I would like to solve are:
- Is there, in general, a well-known method that is faster/more accurate/etc. that I should use instead?
- Is there a smarter way that I can choose the initial value of $k$ if $N$ is known?
- Is it possible to easily choose, a priori, a recursion depth that will converge to 10–14 decimal places if $N$ is known? OR
- Can I test for convergence somehow using only the arithmetic mentioned above and equality? (No comparison operations are available, but I could, for example, test that the difference between two ratios is exactly equal to some other ratio.)
If it's not already obvious, I'm a programmer, not a mathematician, and have no formal experience with diophantine approximation.
Tags: approximation, diophantine-approximation
asked Mar 30 '16 at 20:54 by Nicolas Holthaus
3 Answers
The typical approach here is probably to use Newton's method (https://en.wikipedia.org/wiki/Newton's_method), which for this case gives:
$$k_{n+1} = k_n - \frac{k_n^2 - N}{2k_n}$$
Since you are solving a quadratic, this will always converge to the correct root (or its negative) no matter what starting value you give; however, it will be faster if you pick an initial value nearby. If you're able to, you could try combining a lookup table with linear interpolation to get a reasonable initial guess (but maybe that requires too much machinery?)
lookups and interpolation would require features which won't be available to me at compile time, like you guessed. Is there a simple heuristic I could use to pick a reasonable starting value of $k$? – Nicolas Holthaus, Mar 30 '16 at 21:10
Do you know what scale of numbers you are expecting? If your numbers are going to be in the range 0–100, then you could find a linear function that interpolates in that region and use that. – jacob1729, Mar 30 '16 at 21:14
This is, of course, the OP's proposed scheme. :) – Andrew D. Hwang, Mar 30 '16 at 21:16
@AndrewD.Hwang Unless I'm reading it wrong, doesn't the Babylonian approximation substitute $k_n$ for $k_n^2$ in Newton's formula? That will make it increasingly inaccurate for larger $N$. – jacob1729, Mar 30 '16 at 21:21
Putting the right-hand side of your expression over a common denominator gives $$k_{n+1} = \frac{2k_{n}^{2} - (k_{n}^{2} - N)}{2k_{n}} = \frac{k_{n} + N/k_{n}}{2}.$$ – Andrew D. Hwang, Mar 30 '16 at 23:52
Continuing from previous answers and comments, solving the equation $f(x) = x^k - a = 0$ for $x$ (for any power $k$) can be achieved within your criteria using Newton's method or, faster, with its variants such as Halley's or Householder's methods (or even higher-order methods that do not have a name).
For example, Halley's method,
$$x_{n+1} = x_n - \frac{2 f(x_n)\, f'(x_n)}{2[f'(x_n)]^2 - f(x_n)\, f''(x_n)},$$ would give $$x_{n+1} = x_n\,\frac{a(k+1) + (k-1)x_n^k}{a(k-1) + (k+1)x_n^k}.$$ If $k$ is a whole number and $a$ and $x_0$ are rational, all iterates will be rational numbers.
For illustration, let me use $k=5$ and $a=\frac{123456}{789}$, with $x_0=2$. The very first iterates will be $$x_1=\frac{8768}{3361}$$ $$x_2=\frac{5494130652143802990362144}{2000466367873608426046147}$$
You might be interested in the generalizations by Bahman Kalantari. – J. M. is not a mathematician, May 17 '16 at 4:16
Here are some formulas of my own. They don't require iteration (just plug in an $n$), they are extremely accurate (for even modestly small $n$ the accuracy should already exceed what your calculator will display, with the error decreasing to $0$ in the limit as $n\to\infty$), and they meet your criteria.
$\sqrt{n}\sqrt{n+1}\approx\frac{2n(n+1)}{2n+1}+\frac{2n+1}{4(2n+1)^{2}+1}.$
$\sqrt{n^{2}+1}\approx\frac{2n(n^{2}+1)}{2n^{2}+1}+\frac{2n^{2}+1}{n(4(2n^{2}+1)^{2}+1)}.$
$\sqrt{n^{2}-1}\approx\frac{2n(n^{2}-1)}{2n^{2}-1}+\frac{2n^{2}-1}{n(4(2n^{2}-1)^{2}+1)}.$
$\sqrt{n}\sqrt{2n+1}\approx\frac{1}{\sqrt{2}}\left(\frac{4n(2n+1)}{4n+1}+\frac{4n+1}{4(4n+1)^{2}+1}\right).$
$\sqrt{n}\sqrt{2n-1}\approx\frac{1}{\sqrt{2}}\left(\frac{4n(2n-1)}{4n-1}+\frac{4n-1}{4(4n-1)^{2}+1}\right).$
At the very least, one of these may be a good start for $K$. As an example, for $n=35$, calculating $\sqrt{35^{2}+1}$ using the second formula above is correct to 19 decimal places, which is already past your convergence target of 10–14 decimal places in one simple fraction. And, like I said, the accuracy increases for higher $n$.
The only downfall is that none of these gives $\sqrt{n}$ by itself.
If you need more info on their derivation, you can look at my post "Close approximation of absolute value function."
answered Mar 30 '16 at 21:06 by jacob1729
answered Apr 1 '16 at 7:57 by Claude Leibovici
answered May 17 '16 at 3:43 by e2theipi2026
Thanks for contributing an answer to Mathematics Stack Exchange!