Bounding error of Jacobi method in practical application
$\newcommand{\vb}[1]{\boldsymbol{#1}}$
$\newcommand{\R}{\mathbb{R}}$
$\newcommand{\matspace}{\mathcal{M}_{n\times n}(\R)}$
$\newcommand{\diag}{\mathrm{diag}}$
$\newcommand{\defeq}{\stackrel{\scriptsize\text{def}}{=}}$
$\newcommand{\N}{\mathbb{N}}$
$\newcommand{\norm}[1]{\left\| #1 \right\|_{\infty}}$
I was reading in David Poole's Linear Algebra: A Modern Introduction, section 7.2, a passage on the convergence of the Jacobi method for diagonally dominant matrices. Using the same notation as the book, the system we are trying to solve is $$A\vb{x} = \vb{b}\,,$$
where $A\in\matspace$, $\vb{x}\in\R^n$ and $\vb{b}\in\R^n$. Also, we define $D\in\matspace$ as the diagonal matrix containing the elements on $A$'s diagonal, $U\in\matspace$ as the strictly upper triangular matrix containing the elements above the diagonal, and $L\in\matspace$ as the strictly lower triangular matrix containing the elements below the diagonal, so that $A = D + L + U\,$. To simplify notation, we define $G \defeq -D^{-1}(L + U)$ and $\vb{c} \defeq D^{-1}\vb{b}$.
The equation $A\vb{x} = \vb{b}$ is equivalent to
$$
\vb{x} = G\vb{x} + \vb{c}\,;
$$
the iterates of the Jacobi method are thus obtained as
$$
\vb{x}^{k+1} = G\vb{x}^k + \vb{c}\qquad \forall k \in\N\,.
$$
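The iteration above can be sketched as a few lines of NumPy; the matrix and right-hand side below are hypothetical test data, not the book's example:

```python
import numpy as np

def jacobi(A, b, x0, iters):
    """Run `iters` Jacobi sweeps of x^{k+1} = G x^k + c for A x = b."""
    D = np.diag(np.diag(A))          # diagonal part of A
    G = -np.linalg.solve(D, A - D)   # G = -D^{-1}(L + U)
    c = np.linalg.solve(D, b)        # c = D^{-1} b
    x = x0.astype(float)
    for _ in range(iters):
        x = G @ x + c
    return x

# Strictly diagonally dominant system (hypothetical data).
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b, np.zeros(2), 50)
```

Since $A$ is strictly diagonally dominant, the sweeps converge to the solution of $A\vb{x}=\vb{b}$.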
It can be proven that
$$\norm{\vb{x}-\vb{x}^{k+1}} \le \norm{G}\norm{\vb{x} - \vb{x}^k} < \norm{\vb{x} - \vb{x}^k}$$ since $\norm{G}<1$ (which follows from $A$ being diagonally dominant), implying that the sequence of iterates converges. Applied recursively, the inequality above also yields
$$
\norm{\vb{x}-\vb{x}^{k}} \le \norm{G}^k\norm{\vb{x} - \vb{x}^0}\,,
$$
which can be used to bound the error of an approximation to the desired precision (by finding a lower bound for the number of iterations).
Let $\varepsilon \in \R^+$, and suppose we want to ensure $\norm{\vb{x}-\vb{x}^{k}} < \varepsilon$. It would be sufficient to ensure
$$
\norm{G}^k\norm{\vb{x} - \vb{x}^0} < \varepsilon \quad\Leftrightarrow\quad k > \frac{\log\varepsilon - \log\norm{\vb{x} - \vb{x}^0}}{\log\norm{G}} = \frac{\log\norm{\vb{x} - \vb{x}^0} - \log\varepsilon}{\log\norm{G}^{-1}}
$$
(the last equality is just for convenience, since $\norm{G} < 1$ implies $\log\norm{G}^{-1} > 0$).
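In a synthetic test where the exact solution is known (so $\norm{\vb{x}-\vb{x}^0}$ can be computed directly), this iteration-count bound can be checked numerically; the system below is hypothetical:

```python
import math
import numpy as np

# Hypothetical strictly diagonally dominant test system.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
D = np.diag(np.diag(A))
G = -np.linalg.solve(D, A - D)   # G = -D^{-1}(L + U)
c = np.linalg.solve(D, b)        # c = D^{-1} b

x_exact = np.linalg.solve(A, b)
x0 = np.zeros(2)
eps = 1e-8
normG = np.linalg.norm(G, np.inf)            # infinity norm of G, < 1 here
M = np.linalg.norm(x_exact - x0, np.inf)     # known only in this synthetic test

# k > (log M - log eps) / log(1/||G||)
k = math.ceil((math.log(M) - math.log(eps)) / math.log(1.0 / normG))

x = x0
for _ in range(k):
    x = G @ x + c
err = np.linalg.norm(x_exact - x, np.inf)    # should satisfy err < eps
```

Running $k$ sweeps then indeed leaves the infinity-norm error below $\varepsilon$, as the bound guarantees.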
Now, the problem is the expression $\norm{\vb{x} - \vb{x}^0}\,$: to use it to estimate the number of iterations needed, we would have to know the "exact" solution $\vb{x}$, which is impractical. So, in a practical situation, we need to estimate that expression; concretely, we need an upper bound $M\in\R^+\,$ such that $\norm{\vb{x} - \vb{x}^0} \le M\,$. Indeed, the inequality
$$
k > \frac{\log M - \log\varepsilon}{\log\norm{G}^{-1}}
$$
would necessarily imply
$$
k > \frac{\log\norm{\vb{x} - \vb{x}^0} - \log\varepsilon}{\log\norm{G}^{-1}}\,,
$$
and thus, through all our previous reasoning, the condition $\norm{\vb{x}-\vb{x}^{k}} < \varepsilon$ would be satisfied, so we could set the number of iterations to
$$
k = \left\lceil \frac{\log M - \log\varepsilon}{\log\norm{G}^{-1}} \right\rceil\,.
$$
But the author of the book simply takes $\norm{\vb{x} - \vb{x}^0} \approx \norm{\vb{x}^1 - \vb{x}^0}\,$, without ensuring that $\norm{\vb{x} - \vb{x}^0} \le \norm{\vb{x}^1 - \vb{x}^0}\,$. Wouldn't that allow the possibility that $\norm{\vb{x} - \vb{x}^0} > \norm{\vb{x}^1 - \vb{x}^0}\,$, so that taking $k = \left\lceil\frac{\log \norm{\vb{x}^1 - \vb{x}^0} - \log\varepsilon}{\log\norm{G}^{-1}} \right\rceil$ might fail to guarantee the condition $\norm{\vb{x}-\vb{x}^{k}} < \varepsilon$?
If so, what would be a way of finding an actual upper bound on $\norm{\vb{x}-\vb{x}^{0}} = \norm{\vb{x}}$ (supposing $\vb{x}^0=\vb{0}$), thereby guaranteeing the condition $\norm{\vb{x}-\vb{x}^{k}} < \varepsilon$?
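The concern is easy to realize concretely: with $\vb{x}^0=\vb{0}$ we have $\vb{x}^1-\vb{x}^0=\vb{c}$ while $\vb{x}-\vb{x}^0=(I-G)^{-1}\vb{c}$, and the latter can be strictly larger in norm. A minimal NumPy check on a hypothetical (still strictly diagonally dominant) system:

```python
import numpy as np

# Hypothetical system where ||x - x^0|| exceeds ||x^1 - x^0||.
A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # strictly diagonally dominant
b = np.array([2.0, 2.0])
D = np.diag(np.diag(A))
G = -np.linalg.solve(D, A - D)   # [[0, 0.5], [0.5, 0]], ||G||_inf = 0.5 < 1
c = np.linalg.solve(D, b)        # [1, 1]

x_exact = np.linalg.solve(A, b)  # [2, 2]
x0 = np.zeros(2)
x1 = G @ x0 + c                  # first Jacobi iterate, equals c here

gap_true = np.linalg.norm(x_exact - x0, np.inf)   # ||x - x^0|| = 2
gap_est  = np.linalg.norm(x1 - x0, np.inf)        # ||x^1 - x^0|| = 1
```

Here the book's surrogate underestimates the true initial error by a factor of two, so an iteration count derived from it is not a priori guaranteed to meet the tolerance.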
linear-algebra numerical-methods numerical-linear-algebra
Comments:

– Carl Christian (Jan 25 at 0:25): What can you say about the decay of the residual $r_j = b - Ax_j$? Can you establish a connection between the size of the residual and the size of the error? Can you involve the dominance factor?

– Algebraic Pavel (Jan 25 at 13:35): Probably more useful is to require that $\|x-x_k\|/\|x\|\leq \epsilon$, which is easy to obtain with $x_0=0$ knowing the norm of the iteration matrix, and gives roughly an estimate on the number of valid digits in the solution (assuming the problem is well scaled so that the values of $x$ do not vary much in magnitude).

– VorKir (Feb 3 at 5:19): Take $k=0$ in the first of your estimates and you'll get what you want.
asked Jan 24 at 15:38 by Anakhand; edited Jan 24 at 17:35