Variational formulation of a second-order differential equation
I am given the following differential equation.
Let $\Omega = (a,b)\subset\mathbb{R}$, $f:\Omega \rightarrow\mathbb{R}$, $\alpha,\beta \in \mathbb{R}$ and
$$
-u'' + u = f \\
u(a)= \alpha, \qquad u(b) = \beta
$$
Since this is an inhomogeneous problem, I choose a function $u_\varphi$ with $u_\varphi|_\Gamma = u|_\Gamma$ (where $\Gamma = \{a,b\}$ denotes the boundary), so that I can treat the problem like a homogeneous one by finding a function $u_0$ with $u_0(a)=u_0(b)=0$ such that
$$u = u_\varphi + u_0.$$
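For instance (one admissible choice, not prescribed by the problem), the affine interpolant of the boundary data is such a lifting, and it has the convenient property that $u_\varphi'' \equiv 0$:
$$
u_\varphi(x) = \alpha\,\frac{b-x}{b-a} + \beta\,\frac{x-a}{b-a},
\qquad u_\varphi(a)=\alpha,\quad u_\varphi(b)=\beta .
$$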
The differential equation can then be written as
$$
-(u_0 + u_\varphi)'' + u_0 + u_\varphi = f
$$
To find the variational formulation for $u_0$, I test with an arbitrary test function $v$ with compact support:
$$
\int_\Omega \bigl(-(u_0 + u_\varphi)'' + u_0 + u_\varphi\bigr)\,v \,dx = \int_\Omega f\,v\,dx
$$
yielding
$$
\int_\Omega (-u_0''+u_0)\,v \,dx = \int_\Omega \nabla u_0 \cdot \nabla v \,dx + \int_\Omega u_0\,v \,dx = \int_\Omega (f + u_\varphi'' - u_\varphi)\,v\,dx
$$
which we usually write compactly as
$$
a(u_0,v) + \int_\Omega u_0\,v \,dx = F(v).
$$
The integral $\int_\Omega u_0\,v\,dx$ in the last equation bothers me, since I cannot get rid of it. Is there a way to do so?
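As a concrete illustration, here is a minimal NumPy sketch of how this formulation can be assembled and solved as it stands, mass term included. It assumes piecewise-linear elements on a uniform mesh and the affine lifting above; the sample data `a`, `b`, `alpha`, `beta`, `f` and all names are illustrative, not part of the problem statement.

```python
import numpy as np

# Minimal P1 finite-element sketch for  -u'' + u = f  on (a, b),
# u(a) = alpha, u(b) = beta, written as u = u_phi + u_0 with the
# affine lifting u_phi (illustrative data and names).

a, b, alpha, beta = 0.0, 1.0, 2.0, -1.0
f = lambda x: np.sin(np.pi * x)           # example right-hand side

n = 100                                    # number of elements
x = np.linspace(a, b, n + 1)
h = (b - a) / n

u_phi = alpha * (b - x) / (b - a) + beta * (x - a) / (b - a)

# Element matrices for linear elements: stiffness K_e and mass M_e.
K_e = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
M_e = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])

N = n + 1
A = np.zeros((N, N))                       # assembles a(u0, v) + (u0, v)
rhs = np.zeros(N)
g = f(x) - u_phi                           # u_phi'' = 0 for the affine lifting

for k in range(n):
    idx = [k, k + 1]
    A[np.ix_(idx, idx)] += K_e + M_e       # the mass-matrix term is simply kept
    rhs[idx] += M_e @ g[idx]               # load via linear interpolation of g

# Homogeneous Dirichlet conditions for u_0: drop the boundary rows/columns.
interior = slice(1, N - 1)
u0 = np.zeros(N)
u0[interior] = np.linalg.solve(A[interior, interior], rhs[interior])

u = u_phi + u0                             # approximate solution of the BVP
```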
Thanks!
functional-analysis ordinary-differential-equations distribution-theory finite-element-method
asked Jan 10 at 21:09 by dba; edited Jan 10 at 21:12 by Bernard
Why do you want to get rid of this integral? – gerw, Jan 11 at 8:18

I assumed this stands in the way of solving the problem. But the left side can be written as $a^*(u_0,v)$ with $a^*$ being a bilinear form. – dba, Jan 11 at 14:12
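Spelling out that last comment: the entire left-hand side is itself a bounded, coercive bilinear form on $H_0^1(\Omega)$, namely the $H^1$ inner product, so the Lax–Milgram lemma (or the Riesz representation theorem) applies to it directly whenever $F$ is a bounded linear functional:
$$
a^*(u_0,v) = \int_\Omega u_0'\,v'\,dx + \int_\Omega u_0\,v\,dx = (u_0,v)_{H^1(\Omega)},
\qquad a^*(v,v) = \|v\|_{H^1(\Omega)}^2 .
$$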
1 Answer
The problem at hand can be reduced to a (somewhat more general) normalized problem:
$$
\frac{d^2 T}{d\xi^2} - p^2\, T(\xi) = F(\xi)
$$
The left-hand side of this normalized problem is handled with the help of the following references:
- Understanding Galerkin method of weighted residuals
- Are there any two-dimensional quadrature that only uses the values at the vertices of triangles?
The second reference shows that vertex integration is the most stable one. If we employ this for the right-hand side, then the integral
$$
\int_0^1 F(\xi)\,f(\xi)\,d\xi
$$
results in a load vector $\vec{F}$ instead of $0$. This gives, for the system of equations as a whole (see the first reference):
$$
\begin{bmatrix} E_{0,0}^{(1)} & E_{0,1}^{(1)} & 0 & 0 & 0 & \cdots \\
E_{1,0}^{(1)} & E_{1,1}^{(1)}+E_{0,0}^{(2)} & E_{0,1}^{(2)} & 0 & 0 & \cdots \\
0 & E_{1,0}^{(2)} & E_{1,1}^{(2)}+E_{0,0}^{(3)} & E_{0,1}^{(3)} & 0 & \cdots \\
0 & 0 & E_{1,0}^{(3)} & E_{1,1}^{(3)}+E_{0,0}^{(4)} & E_{0,1}^{(4)} & \cdots \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots \end{bmatrix}
\begin{bmatrix} T_1 \\ T_2 \\ T_3 \\ T_4 \\ T_5 \\ \cdots \end{bmatrix} =
\begin{bmatrix} F_1 \\ F_2 \\ F_3 \\ F_4 \\ F_5 \\ \cdots \end{bmatrix}
$$
with the boundary conditions properly imposed.
The original problem, with $x$ and $u$ instead of $\xi$ and $T$, is recovered by employing the following transformations, so that $\xi_k \;\rightarrow\; x_k$ and $T_k \;\rightarrow\; u_k$:
$$
x = (b-a)\xi+a \quad \Longrightarrow \quad
\begin{cases} x = a \;\leftrightarrow\; \xi = 0 \\ x = b \;\leftrightarrow\; \xi = 1 \end{cases}
\\
u = (\beta-\alpha)T+\alpha \quad \Longrightarrow \quad
\begin{cases} u = \alpha \;\leftrightarrow\; T = 0 \\ u = \beta \;\leftrightarrow\; T = 1 \end{cases}
$$
Note: the variational formulation and the Galerkin method coincide in this case.
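To make the recipe concrete, here is a minimal sketch of this normalized approach: linear elements, vertex (lumped) quadrature for the load, Dirichlet values $T(0)=0$, $T(1)=1$, and the back-transformation to $x$ and $u$. The sign conventions, names, and sample data are my own choices and may differ from the linked references; it also assumes $\beta \neq \alpha$ so the substitution is well defined.

```python
import numpy as np

# Sketch of the normalized approach: solve  T'' - p^2 T = F  on (0, 1)
# with T(0) = 0, T(1) = 1, then map back to x and u (illustrative only).

a, b, alpha, beta = 0.0, 1.0, 2.0, -1.0    # sample data, beta != alpha
f = lambda x: np.sin(np.pi * x)

# Substituting x = (b-a)*xi + a and u = (beta-alpha)*T + alpha into
# -u'' + u = f gives  T'' - (b-a)^2 T = (b-a)^2 (alpha - f(x)) / (beta - alpha).
p = b - a
F = lambda xi: p**2 * (alpha - f(p * xi + a)) / (beta - alpha)

n = 100
xi = np.linspace(0.0, 1.0, n + 1)
h = 1.0 / n

# Element matrices for the weak form of  -T'' + p^2 T  (sign flipped so the
# stiffness part is positive definite): E = K_e + p^2 * M_e.
K_e = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
M_e = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
E = K_e + p**2 * M_e

N = n + 1
A = np.zeros((N, N))
for k in range(n):
    A[np.ix_([k, k + 1], [k, k + 1])] += E

# Vertex (lumped) quadrature for the load vector, sign flipped to match A;
# the two boundary entries are never used (Dirichlet rows are dropped below).
load = -F(xi) * h
load[0] *= 0.5
load[-1] *= 0.5

# Impose T(0) = 0 and T(1) = 1, solve for the interior nodal values.
T = np.zeros(N)
T[-1] = 1.0
rhs = load[1:-1] - A[1:-1, -1] * T[-1]
T[1:-1] = np.linalg.solve(A[1:-1, 1:-1], rhs)

# Map back to the original variables.
x = p * xi + a
u = (beta - alpha) * T + alpha
```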
answered Jan 12 at 20:56 by Han de Bruijn