How does one approximate a second derivative with ATPS interpolation?


























When using the Dual Reciprocity Boundary Element Method (or any radial basis function method) to solve a nonlinear differential equation, it is necessary to approximate some derivatives of a potential field using radial basis functions.



Suppose you have a potential field with values $u_i$ at coordinates $\mathbf{x}_i=(x_i,y_i)$. The field can be approximated with radial basis functions such that $\mathbf{u}=\mathbf{F}\boldsymbol{\alpha}$, where $\mathbf{F}$ is a matrix of radial basis functions based on the following:
$$ f(\mathbf{x}_i)=\sum_{j=1}^{N}\alpha_j\,\phi(\|\mathbf{x}_i-\mathbf{x}_j\|_2)$$
$$ \mathbf{F}=\left[
\begin{matrix}
\phi(\|\mathbf{x}_1-\mathbf{x}_1\|_2) & \cdots & \phi(\|\mathbf{x}_1-\mathbf{x}_N\|_2) \\
\vdots & \ddots & \vdots \\
\phi(\|\mathbf{x}_N-\mathbf{x}_1\|_2) & \cdots & \phi(\|\mathbf{x}_N-\mathbf{x}_N\|_2) \\
\end{matrix}\right]
$$

$$ \boldsymbol{\alpha}=\left[\begin{matrix}
\alpha_1 \\
\vdots \\
\alpha_N \\
\end{matrix}\right]$$



For ATPS (Augmented Thin Plate Splines) the interpolant becomes:
$$ \mathbf{x}_i=(x_i,y_i) $$
$$ r=\sqrt{ (x_i-x_j)^2 + (y_i-y_j)^2 }=\|\mathbf{x}_i-\mathbf{x}_j\| $$
$$f(\mathbf{x}_i)=\sum_{j=1}^{N}\alpha_j r^2 \log(r) + \beta_1+\beta_2 x_i+\beta_3 y_i $$

subject to the side constraints

$$\sum_{j=1}^N \alpha_j=\sum_{j=1}^N \alpha_j x_j=\sum_{j=1}^N \alpha_j y_j=0
$$



$$ \mathbf{P}= \left[
\begin{matrix}
1 & 1 & \cdots & 1 \\
x_1 & x_2 & \cdots & x_N \\
y_1 & y_2 & \cdots & y_N \\
\end{matrix} \right]
$$

$$ \mathbf{F}^*= \left[
\begin{matrix}
\mathbf{F} & \mathbf{P}^T \\
\mathbf{P} & \mathbf{0} \\
\end{matrix} \right]
$$



$$ \left[
\begin{matrix}
\mathbf{u} \\
\mathbf{0} \\
\end{matrix} \right]= \left[
\begin{matrix}
\mathbf{F} & \mathbf{P}^T \\
\mathbf{P} & \mathbf{0} \\
\end{matrix} \right]\left[
\begin{matrix}
\boldsymbol{\alpha} \\
\boldsymbol{\beta} \\
\end{matrix} \right]=\mathbf{F}^*\boldsymbol{\alpha}^*
$$
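The augmented system above can be assembled and solved directly. A minimal NumPy sketch (the function names are illustrative, not from the question; the kernel's removable singularity at $r=0$ is set to zero):

```python
import numpy as np

def tps_kernel(r, eps=1e-12):
    """phi(r) = r^2 log r, with the removable singularity at r = 0 set to 0."""
    out = np.zeros_like(r)
    mask = r > eps
    out[mask] = r[mask] ** 2 * np.log(r[mask])
    return out

def atps_fit(nodes, u):
    """Solve [[F, P^T], [P, 0]] [alpha; beta] = [u; 0] for the ATPS coefficients.

    nodes : (N, 2) array of points x_i = (x_i, y_i)
    u     : (N,) field values at the nodes
    """
    N = len(nodes)
    r = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    F = tps_kernel(r)
    P = np.vstack([np.ones(N), nodes[:, 0], nodes[:, 1]])        # 3 x N
    Fstar = np.block([[F, P.T], [P, np.zeros((3, 3))]])          # (N+3) x (N+3)
    coef = np.linalg.solve(Fstar, np.concatenate([u, np.zeros(3)]))
    return coef[:N], coef[N:]                                    # alpha, beta

def atps_eval(nodes, alpha, beta, x):
    """Evaluate f(x) = sum_j alpha_j phi(||x - x_j||) + beta . [1, x, y]."""
    r = np.linalg.norm(x - nodes, axis=-1)
    return alpha @ tps_kernel(r) + beta @ np.array([1.0, x[0], x[1]])
```

Solving the augmented system enforces the three side constraints on $\alpha$ automatically, since they appear as the bottom rows $\mathbf{P}\boldsymbol{\alpha}=\mathbf{0}$.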



As $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ are constants, a spatial derivative of $\mathbf{u}$ becomes a spatial derivative of the radial basis function series. Thus:



$$ \frac{\partial\mathbf{u}}{\partial x}=\frac{\partial\mathbf{F}^*}{\partial x}\boldsymbol{\alpha}^*=\frac{\partial\mathbf{F}^*}{\partial x}{\mathbf{F}^*}^{-1}\mathbf{u}
$$



Similarly, the second derivative would be
$$ \frac{\partial^2\mathbf{u}}{\partial x^2}=\frac{\partial^2\mathbf{F}^*}{\partial x^2}{\mathbf{F}^*}^{-1}\mathbf{u}
$$



However, when taking the second derivative of the TPS kernel I obtain:
$$\frac{\partial^2 f(\mathbf{x}_i)}{\partial x^2}=\sum_{j=1}^{N}\alpha_j \left[2\log(r)+\frac{2(x_i-x_j)^2}{r^2}+1 \right] $$
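The kernel's second $x$-derivative is easy to check symbolically. A small sketch, assuming SymPy is available (for $r>0$ one finds $\partial^2(r^2\log r)/\partial x^2 = 2\log r + 1 + 2(x_i-x_j)^2/r^2$):

```python
import sympy as sp

x, y, xj, yj = sp.symbols('x y x_j y_j', real=True)
r2 = (x - xj)**2 + (y - yj)**2             # r^2 = ||x_i - x_j||^2
phi = sp.Rational(1, 2) * r2 * sp.log(r2)  # r^2 log r = (1/2) r^2 log(r^2)
d2 = sp.diff(phi, x, 2)
# For r > 0:  d^2(r^2 log r)/dx^2 = log(r^2) + 1 + 2 (x - x_j)^2 / r^2
expected = sp.log(r2) + 1 + 2 * (x - xj)**2 / r2
assert sp.simplify(d2 - expected) == 0
```

The $2\log(r)$ term is what diverges to $-\infty$ as $r\to 0$, which is exactly the diagonal problem discussed below.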



If you then take the limit of this as $\mathbf{x}_i$ approaches $\mathbf{x}_j$, you obtain $-\infty$ for the diagonals of the matrix $\frac{\partial^2\mathbf{F}^*}{\partial x^2}$.



Thus I do not understand how to obtain a second-derivative approximation of a potential field $u$ using Augmented Thin Plate Splines.






















  • I have an answer for you, but also a couple of questions. Why is this called augmented TPS, not just TPS?
    – rych, Jan 27 at 13:12










  • Now, something that will hopefully lead us to a solution. Do you know the value of $r^2 \log r$ at $r=0$?
    – rych, Jan 27 at 13:14










  • It is augmented TPS because of the additional three polynomial terms after the $r^2\log(r)$. For the value at $r=0$ you can take the limit as $r$ goes to zero: $\lim_{r \to 0} r^2\log(r)=\lim_{r \to 0}\frac{\log(r)}{1/r^2}$, which has the indeterminate form $\frac{-\infty}{\infty}$, so by L'Hôpital's rule $\lim_{r \to 0}\frac{\log(r)}{1/r^2}=\lim_{r \to 0}\frac{1/r}{-2/r^3}=\lim_{r \to 0}\left(-\frac{r^2}{2}\right)=0$.
    – N. Morgan, Jan 28 at 2:12












  • The first spatial derivative of the matrix $\mathbf{F}$ also has zero diagonals because of the same limit as $r \to 0$. It is only the second spatial derivative that goes to $-\infty$.
    – N. Morgan, Jan 28 at 2:17
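The limit worked out in the comments above can be corroborated symbolically; a quick check assuming SymPy is available:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
# The removable singularity of the TPS kernel: r^2 log r -> 0 as r -> 0+
assert sp.limit(r**2 * sp.log(r), r, 0, dir='+') == 0
```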
















numerical-methods partial-derivative interpolation rbf

asked Jan 26 at 3:02 by N. Morgan · edited Jan 29 at 1:04
2 Answers



















It is worth remembering that approximation theory is a branch of numerical analysis. In numerical analysis, when you're dealing with negligible terms, you don't look closely into how smooth those terms are: once they are sufficiently close to zero, they are zero.



When you program TPS evaluation and want to avoid a runtime error, you shouldn't attempt to evaluate $r^2 \log r$ at $r=0$: although the limit value is zero, part of that expression is still $\log r$, which is simply undefined at $0$.



Instead, TPS is programmed as a piecewise function
$$\phi(r)=\begin{cases} r^2\log r, & r>\varepsilon \\ 0, & r\le\varepsilon. \end{cases}$$



So what you're trying to calculate is the second derivative of a constant $0$. It is zero.



Numerics aside, in pure analysis we are not even allowed to differentiate a function where it is undefined; and if you make the limit part of the definition, then you won't be allowed to simply swap differentiation and the limit at the singularity.
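In code, this prescription amounts to guarding every kernel (and kernel-derivative) evaluation with the same epsilon test. A minimal NumPy sketch of the second $x$-derivative matrix entries (the function name is illustrative; the $r>0$ formula is the standard derivative of $r^2\log r$):

```python
import numpy as np

def tps_d2x(xi, yi, xj, yj, eps=1e-12):
    """Elementwise second x-derivative of phi = r^2 log r.

    Entries with r <= eps (e.g. the matrix diagonal) are set to 0,
    following the piecewise definition of the kernel."""
    dx = np.asarray(xi, float) - np.asarray(xj, float)
    dy = np.asarray(yi, float) - np.asarray(yj, float)
    r2 = dx ** 2 + dy ** 2
    out = np.zeros_like(r2)
    mask = r2 > eps ** 2
    # For r > 0:  d^2(r^2 log r)/dx^2 = 2 log r + 1 + 2 dx^2 / r^2
    out[mask] = np.log(r2[mask]) + 1.0 + 2.0 * dx[mask] ** 2 / r2[mask]
    return out
```

Whether zeroing the diagonal of the second-derivative matrix is an acceptable approximation is exactly what the comments below debate.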






answered Jan 28 at 11:43 · edited Jan 28 at 13:25 · rych













  • How large should epsilon be? Should it be sized to only reduce the diagonal terms to zero, or should it potentially zero other parts of the matrix?
    – N. Morgan, Jan 29 at 0:59










  • On a related note, but not strictly following the original question... I sometimes observe a Runge's-phenomenon effect on the edges of the area being interpolated and approximated. This effect tends to be more exaggerated in the derivatives. I added the polynomial smoother to the TPS (making it ATPS) in the hopes of addressing this overfit behavior. Do you know any other methods to improve the edge predictions for the derivatives? (I am looking into using Hermite interpolation instead, for what has been labeled "mass conservative interpolation of velocity derivatives" in some papers/books.)
    – N. Morgan, Jan 29 at 1:03










  • Well, theoretically epsilon should be zero: you're removing the singularity at one single point, $r=0$. But in practice, take $\varepsilon$ to be a smallest floating point number, for example. Actually, as an experiment, in your computer program, what happens if you try to evaluate $r\log r$ at $0$?
    – rych, Jan 29 at 6:55










  • You cannot use TPS for data approximation without the linear polynomial in front, of course. See for example Table 3.1 in Armin Iske, Multiresolution Methods in Scattered Data Modelling, 2004.
    – rych, Jan 29 at 7:00










  • In the interpolation problem with RBF, such as the TPS you have here, there are no "edges". However, for TPS it's recommended to scale your data into the $[-0.5,0.5]^2$ square first.
    – rych, Jan 29 at 7:04






















A "classic" TPS spline $\sigma$ is a continuously differentiable function, $\sigma \in C^1(\mathbb{R}^2)$. If you need to approximate second derivatives of the function, then you have to use a twice continuously differentiable version of the Duchon spline.
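For reference, one standard higher-order member of the Duchon (polyharmonic) family in two dimensions is the next-order kernel
$$\phi(r)=r^4\log r, \qquad \phi''(r)=12\,r^2\log r+7r^2 \;\longrightarrow\; 0 \quad\text{as } r\to 0,$$
used with an augmenting polynomial of degree two rather than one; its second derivatives have a removable singularity at $r=0$ instead of the $\log r$ divergence above. (This example is a standard choice, not necessarily the specific construction in the references below.)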



















  • Could you add a formula, or a reference link, to a "twice continuously differentiable version of the Duchon spline" please?
    – rych, Feb 12 at 6:17










  • Sure. Here we are: R. Arcangéli, Multidimensional Minimizing Splines, Springer, 2004; A. Bezhaev, V. Vasilenko, Variational Theory of Splines, Springer, 2001; J. Duchon, "Splines minimizing rotation-invariant semi-norms in Sobolev spaces", Lect. Notes in Math., Vol. 571, Springer, Berlin, 1977.
    – NetTvor, Feb 12 at 8:42










  • Also, a simple theory of something close to the Duchon splines can be found at normalsplines.blogspot.com/2019/01/…
    – NetTvor, Feb 12 at 8:53










  • One month later and I've started reading the references you posted. Thank you! A good chance to improve my approximation-theory education. For now I'm a bit confused: say we reconstruct a surface from a point cloud using TPS: isn't it smooth and curvature-continuous despite TPS being only once differentiable at the centers? What's your opinion of my answer above?
    – rych, Mar 13 at 10:16











Your Answer





StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
});
});
}, "mathjax-editing");

StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "69"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});

function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});


}
});














draft saved

draft discarded


















StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3087859%2fhow-does-one-approximate-a-second-derivative-with-atps-interpolation%23new-answer', 'question_page');
}
);

Post as a guest















Required, but never shown

























2 Answers
2






active

oldest

votes








2 Answers
2






active

oldest

votes









active

oldest

votes






active

oldest

votes









1












$begingroup$

It is worth remembering that approximation theory is a branch of numerical analysis. In numerical analysis when you're dealing with negligible terms, you don't look closely into how smooth those terms are: once they are sufficiently close to zero they are zero.



As you program TPS evaluation and want to avoid a runtime error you shouldn't attempt evaluating $r^2 log r$ at $r=0$: although the limit value is zero, part of that expression is still $log r$ which is simply undefined at $0$.



Instead, TPS is programmed as a piecewise function $cases{r^2log r, r>varepsilon \0}$.



So what you're trying to calculate is the second derivative of a constant $0$. It is zero.



Numerics aside, in pure analysis, we are not even allowed to differentiate a functions where its undefined; and if you make limit a part of definition, then you won't be allowed to just swap differentiation and limit at the singularity.






share|cite|improve this answer











$endgroup$













  • $begingroup$
    How large should epsilon be? Should it be sized to only reduce the diagonal terms to zero? or should it potentially zero other parts of the matrix?
    $endgroup$
    – N. Morgan
    Jan 29 at 0:59










  • $begingroup$
    On a related note but not strictly following the original question... I sometimes observe a Runge's phenomenon effect on the edges of the area being interpolated and approximated. This effect tends to be more exaggerated in the derivatives. I added the polynomial smoother to the TPS (making it ATPS) in the hopes of addressing this overfit behavior. Do know any other methods to improve the edge predictions for the derivatives? (I am looking into using hermite interpolation instead for what has been labeled "mass conservative interpolation of velocity derivatives" in some papers/books)
    $endgroup$
    – N. Morgan
    Jan 29 at 1:03










  • $begingroup$
    Well, theoretically epsilon should be zero: you're removing the singularity at one single point $r=0$. But in practice, take $varepsilon$ a smallest floating point number, for example. Actually, as an experiment, in your computer program, what happens if you try to evaluate $rlog r$ at $0$?
    $endgroup$
    – rych
    Jan 29 at 6:55










  • $begingroup$
    You cannot use TPS for data approximation without the linear polynomial in front, of course. See for example Table 3.1. in Armin Iske 2004 Multiresolution Methods in Scattered Data Modelling.
    $endgroup$
    – rych
    Jan 29 at 7:00










  • $begingroup$
    In the interpolation problem with RBF, such as TPS you have here, there are no "edges". However, for TPS, it's recommended to scale your data into a $[-0.5,0.5]^2$ interval first.
    $endgroup$
    – rych
    Jan 29 at 7:04


















1












$begingroup$

It is worth remembering that approximation theory is a branch of numerical analysis. In numerical analysis when you're dealing with negligible terms, you don't look closely into how smooth those terms are: once they are sufficiently close to zero they are zero.



As you program TPS evaluation and want to avoid a runtime error you shouldn't attempt evaluating $r^2 log r$ at $r=0$: although the limit value is zero, part of that expression is still $log r$ which is simply undefined at $0$.



Instead, TPS is programmed as a piecewise function $cases{r^2log r, r>varepsilon \0}$.



So what you're trying to calculate is the second derivative of a constant $0$. It is zero.



Numerics aside, in pure analysis, we are not even allowed to differentiate a functions where its undefined; and if you make limit a part of definition, then you won't be allowed to just swap differentiation and limit at the singularity.






share|cite|improve this answer











$endgroup$













  • $begingroup$
    How large should epsilon be? Should it be sized to only reduce the diagonal terms to zero? or should it potentially zero other parts of the matrix?
    $endgroup$
    – N. Morgan
    Jan 29 at 0:59










  • $begingroup$
    On a related note but not strictly following the original question... I sometimes observe a Runge's phenomenon effect on the edges of the area being interpolated and approximated. This effect tends to be more exaggerated in the derivatives. I added the polynomial smoother to the TPS (making it ATPS) in the hopes of addressing this overfit behavior. Do know any other methods to improve the edge predictions for the derivatives? (I am looking into using hermite interpolation instead for what has been labeled "mass conservative interpolation of velocity derivatives" in some papers/books)
    $endgroup$
    – N. Morgan
    Jan 29 at 1:03










  • $begingroup$
    Well, theoretically epsilon should be zero: you're removing the singularity at one single point $r=0$. But in practice, take $varepsilon$ a smallest floating point number, for example. Actually, as an experiment, in your computer program, what happens if you try to evaluate $rlog r$ at $0$?
    $endgroup$
    – rych
    Jan 29 at 6:55










  • $begingroup$
    You cannot use TPS for data approximation without the linear polynomial in front, of course. See for example Table 3.1. in Armin Iske 2004 Multiresolution Methods in Scattered Data Modelling.
    $endgroup$
    – rych
    Jan 29 at 7:00










  • $begingroup$
    In the interpolation problem with RBF, such as TPS you have here, there are no "edges". However, for TPS, it's recommended to scale your data into a $[-0.5,0.5]^2$ interval first.
    $endgroup$
    – rych
    Jan 29 at 7:04
















1












1








1





$begingroup$

It is worth remembering that approximation theory is a branch of numerical analysis. In numerical analysis when you're dealing with negligible terms, you don't look closely into how smooth those terms are: once they are sufficiently close to zero they are zero.



As you program TPS evaluation and want to avoid a runtime error you shouldn't attempt evaluating $r^2 log r$ at $r=0$: although the limit value is zero, part of that expression is still $log r$ which is simply undefined at $0$.



Instead, TPS is programmed as a piecewise function $cases{r^2log r, r>varepsilon \0}$.



So what you're trying to calculate is the second derivative of a constant $0$. It is zero.



Numerics aside, in pure analysis, we are not even allowed to differentiate a functions where its undefined; and if you make limit a part of definition, then you won't be allowed to just swap differentiation and limit at the singularity.

















$endgroup$



edited Jan 28 at 13:25
answered Jan 28 at 11:43
– rych
  • $begingroup$
    How large should epsilon be? Should it be sized only to zero out the diagonal terms, or could it also zero other entries of the matrix?
    $endgroup$
    – N. Morgan
    Jan 29 at 0:59










$begingroup$

A "classic TPS spline $\sigma$ is a continuously differentiable function $\sigma \in C^1(\mathbb{R}^2)$". If you need to approximate second derivatives of the function, then you have to use a twice continuously differentiable version of the Duchon spline.

















$endgroup$
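For reference, one commonly cited smoother member of the 2-D polyharmonic/Duchon family is $r^4 \log r$ (augmented by a quadratic rather than a linear polynomial); unlike $r^2\log r$, its second derivatives stay bounded at the center. A sketch of mine comparing the second radial derivatives of the two kernels near $r=0$ (an illustration, not taken from the answer):

```python
import numpy as np

def d2_tps(r):
    # d^2/dr^2 of r^2 log r  =  2 log r + 3   (diverges as r -> 0)
    return 2.0 * np.log(r) + 3.0

def d2_duchon3(r):
    # d^2/dr^2 of r^4 log r  =  12 r^2 log r + 7 r^2   (tends to 0)
    return 12.0 * r**2 * np.log(r) + 7.0 * r**2

for r in [1e-2, 1e-4, 1e-6]:
    print(r, d2_tps(r), d2_duchon3(r))
# d2_tps blows up toward -inf while d2_duchon3 tends to 0 as r -> 0
```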













  • $begingroup$
    Could you add a formula, or a reference link to a "twice continuously differentiable version of the Duchon spline" please?
    $endgroup$
    – rych
    Feb 12 at 6:17










  • $begingroup$
    Sure. Here we are: R. Arcangéli, Multidimensional Minimizing Splines, Springer, 2004; A. Bezhaev, V. Vasilenko, Variational Theory of Splines, Springer, 2001; J. Duchon, "Splines minimizing rotation-invariant semi-norms in Sobolev spaces", Lect. Notes in Math., Vol. 571, Springer, Berlin, 1977.
    $endgroup$
    – NetTvor
    Feb 12 at 8:42










  • $begingroup$
    Also, a simple theory of something close to the Duchon splines can be found at normalsplines.blogspot.com/2019/01/…
    $endgroup$
    – NetTvor
    Feb 12 at 8:53










  • $begingroup$
    One month later and I've started reading the references you posted. Thank you! A good chance to improve my approximation-theory education. For now I'm a bit confused: say we reconstruct a surface from a point cloud using TPS; isn't it smooth and curvature-continuous despite the TPS being only once differentiable at the centers? What's your opinion of my answer above?
    $endgroup$
    – rych
    Mar 13 at 10:16
















edited Feb 11 at 20:16
dantopa
answered Feb 11 at 17:13
– NetTvor













