Given a matrix $A$ such that $a_{ij} = \frac{1}{i + j - 1}$, prove that $A$ is invertible and that $A^{-1}$ has...
This question already has an answer here: Why does the inverse of the Hilbert matrix have integer entries? (2 answers)
Essentially just the title. This is supposed to be an introductory upper-division linear algebra class, so we haven't covered the determinant yet. I mention this because I am aware of the theorem that the conclusion would follow if the determinant were either 1 or -1. We haven't proven that in class yet, so we can't use it. Anyway, if any of you have any ideas or hints, I would greatly appreciate it.
Thank you!
EDIT: Actually, that theorem does not apply in the way I stated. Either way, it cannot be used to solve this problem, as it wasn't covered in class.
EDIT 2: This question is different from the other question asking why the elements of the inverse of the Hilbert matrix are integers, because my solution cannot include much outside of row reduction (elementary matrices), the definition of matrix multiplication, and the basic properties of matrix inverses.
linear-algebra matrices linear-transformations inverse
marked as duplicate by Lord Shark the Unknown, onurcanbektas, Shailesh, A. Pongrácz, José Carlos Santos
Jan 22 at 9:06
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
That theorem only applies to matrices which have integer entries to begin with. In this case, what you would actually need would be that the determinant is $\frac{\pm 1}{N}$ for some integer $N$ where the denominator of every $(n-1) \times (n-1)$ minor determinant divides $N$.
– Daniel Schepler Jan 22 at 1:36
@DanielSchepler Thanks for the response! I must have misread it when I googled similar problems before. I corrected it in the body of the post.
– Daniel Apsley Jan 22 at 1:39
For the invertibility part, the most efficient proof I know of is: if $x = (x_0, x_1, \ldots, x_{n-1})^T$ were a vector in the null space of $A$, then it turns out $0 = x^T A x = \int_0^1 (x_0 + x_1 t + x_2 t^2 + \cdots + x_{n-1} t^{n-1})^2 \, dt$, implying you must have $x_0 = x_1 = \cdots = x_{n-1} = 0$.
– Daniel Schepler Jan 22 at 1:41
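A quick sanity check of the identity in this comment, as a sketch in exact rational arithmetic (the size $n = 3$ and the test polynomial $p(t) = 2 - 3t + t^2$ are arbitrary choices):

```python
from fractions import Fraction

n = 3
# Hilbert matrix with 0-based indices: H_ij = 1/(i + j + 1)
H = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
x = [Fraction(2), Fraction(-3), Fraction(1)]   # p(t) = 2 - 3t + t^2

# Quadratic form x^T H x
quad = sum(x[i] * H[i][j] * x[j] for i in range(n) for j in range(n))

# int_0^1 p(t)^2 dt: square the coefficient list, then integrate term by term
c = [Fraction(0)] * (2 * n - 1)
for i in range(n):
    for j in range(n):
        c[i + j] += x[i] * x[j]
integral = sum(c[m] / (m + 1) for m in range(2 * n - 1))

assert quad == integral and quad > 0   # zero only when p is the zero polynomial
```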
See here. For reference, this matrix is known as the Hilbert matrix; that thread was the first result in a search for "Hilbert matrix inverse".
– jmerry Jan 22 at 2:00
@jmerry Thanks for that! Knowing what it is is really helpful. Unfortunately, all the proofs of the description of such an inverse are far out of the reach of the class.
– Daniel Apsley Jan 22 at 2:24
asked Jan 22 at 1:31 by Daniel Apsley, edited Jan 22 at 2:28
1 Answer
To abbreviate my post here: a determinant-free proof that the inverse of the Hilbert matrix has integer entries.
Consider the inner product $\langle f,g\rangle = \int_0^1 fg$ on nice enough functions. The $n\times n$ Hilbert matrix $H$ has $ij$ entry (running the labels from zero to $n-1$) $\langle x^i,x^j\rangle$. This makes it a Gramian matrix. The vectors we used are the usual basis for the polynomials of degree $\le n-1$, and from that we can write the inner product of any two such polynomials, written in terms of the usual basis, as $\langle f,g\rangle = f^T H g$.
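As a concrete sketch of the identity $\langle f,g\rangle = f^T H g$ in exact arithmetic (the polynomials $f = 1 + 2x$ and $g = 3 - x$ are arbitrary choices):

```python
from fractions import Fraction

# 2x2 case: H_ij = <x^i, x^j> = 1/(i + j + 1), 0-based indices
H = [[Fraction(1, i + j + 1) for j in range(2)] for i in range(2)]
f, g = [1, 2], [3, -1]   # coefficient vectors of 1 + 2x and 3 - x in the basis {1, x}

# Bilinear form f^T H g
bilinear = sum(f[i] * H[i][j] * g[j] for i in range(2) for j in range(2))

# Direct integral: int_0^1 (1 + 2x)(3 - x) dx = int_0^1 (3 + 5x - 2x^2) dx
integral = Fraction(3) + Fraction(5, 2) - Fraction(2, 3)
assert bilinear == integral
```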
Let $P$ be an upper triangular matrix with its columns orthonormal with respect to this inner product - the columns are exactly what we get by applying Gram-Schmidt to the standard basis. Then $P^T H P = I$. Move some things around, and $H=(P^T)^{-1} P^{-1}$, so $H^{-1}=P\cdot P^T$.
That's all very general stuff - but we started with something special. We already know a lot about polynomials - in particular, we have an explicit family of orthogonal polynomials with respect to this product, the Legendre polynomials. The usual inner product in the definition there uses the interval $[-1,1]$, but going to $[0,1]$ is just an affine substitution. Legendre polynomials in the standard scaling are normalized to values of $\pm 1$ at the endpoints - here, that would be $q_k(x) = \frac{1}{k!}\frac{d^k}{dx^k}\left((x-x^2)^k\right)$. We want orthonormal scaling, so we note that
$$\langle q_k,q_k\rangle = \int_0^1 \frac{1}{(k!)^2}\frac{d^k}{dx^k}\left((x-x^2)^k\right)\cdot \frac{d^k}{dx^k}\left((x-x^2)^k\right)\,dx$$
Integrating by parts $k$ times, we get
$$\langle q_k,q_k\rangle = \frac{(-1)^k}{(k!)^2}\int_0^1 \left((x-x^2)^k\right)\cdot \frac{d^{2k}}{dx^{2k}}\left((x-x^2)^k\right)\,dx = \binom{2k}{k}\int_0^1 x^k(1-x)^k\,dx$$
Integrate that by parts another $k$ times, and it becomes
$$\langle q_k,q_k\rangle = \binom{2k}{k}\int_0^1 \frac{k!\,x^{2k}}{(2k)!}\cdot (k!)\,dx = \int_0^1 x^{2k}\,dx = \frac{1}{2k+1}$$
To get our orthonormal scaling, we then divide $q_k$ by the square root of this number and get $p_k = \sqrt{2k+1}\,q_k$.
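The norm computation above can be checked in exact arithmetic. This sketch builds the $q_k$ directly from the Rodrigues-style formula $q_k(x) = \frac{1}{k!}\frac{d^k}{dx^k}\left((x-x^2)^k\right)$, expanding the binomial and differentiating term by term:

```python
from fractions import Fraction
from math import comb, factorial

def q_coeffs(k):
    # Integer coefficients (of x^0 .. x^k) of q_k(x) = (1/k!) d^k/dx^k ((x - x^2)^k):
    # (x - x^2)^k = sum_j (-1)^j C(k,j) x^{k+j}, and the k-th derivative of x^{k+j}
    # is ((k+j)!/j!) x^j; dividing by k! leaves an integer.
    return [(-1) ** j * comb(k, j) * factorial(k + j) // (factorial(j) * factorial(k))
            for j in range(k + 1)]

def inner(a, b):
    # <f, g> = int_0^1 f g dx for polynomials given by coefficient lists
    return sum(Fraction(a[i] * b[j], i + j + 1)
               for i in range(len(a)) for j in range(len(b)))

assert q_coeffs(2) == [1, -6, 6]                      # q_2(x) = 1 - 6x + 6x^2
for k in range(6):
    assert inner(q_coeffs(k), q_coeffs(k)) == Fraction(1, 2 * k + 1)
    for j in range(k):
        assert inner(q_coeffs(j), q_coeffs(k)) == 0   # orthogonality
```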
Now, we also note that the $q_k$ are all polynomials with integer coefficients; the operator $\frac{1}{k!}\frac{d^k}{dx^k}$ preserves integer polynomials. So then, writing $Q$ for the integer matrix $\begin{pmatrix}q_0 & q_1 &\cdots & q_{n-1}\end{pmatrix}$ whose columns are the coefficient vectors of the $q_k$, we can write
$$P=\begin{pmatrix}q_0 & q_1 &\cdots & q_{n-1}\end{pmatrix} \begin{pmatrix}1&0&\cdots&0\\0&\sqrt{3}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots & \sqrt{2n-1}\end{pmatrix}$$
$$P\cdot P^T = \begin{pmatrix}q_0 & q_1 &\cdots & q_{n-1}\end{pmatrix} \begin{pmatrix}1&0&\cdots&0\\0&\sqrt{3}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots & \sqrt{2n-1}\end{pmatrix}^2\begin{pmatrix}q_0^T \\ q_1^T \\ \vdots \\ q_{n-1}^T\end{pmatrix}$$
$$H^{-1} = P\cdot P^T = Q\begin{pmatrix}1&0&\cdots&0\\0&3&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&2n-1\end{pmatrix}Q^T$$
All three terms of that last product are matrices with integer coefficients, and $H^{-1}$ must be an integer matrix. Of course, now that we have this explicit $LU$ factorization for the inverse, we can calculate all sorts of things about it easily, such as the sum referenced in the linked thread.
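The whole factorization can be verified in exact rational arithmetic. This sketch checks $H^{-1} = Q\,\mathrm{diag}(1,3,\ldots,2n-1)\,Q^T$ for $n=5$ (the size is an arbitrary choice), confirming both that the product inverts $H$ and that every entry is an integer:

```python
from fractions import Fraction
from math import comb, factorial

n = 5  # size of the Hilbert matrix; an arbitrary choice for this check

def q_coeffs(k):
    # Integer coefficients of q_k(x) = (1/k!) d^k/dx^k ((x - x^2)^k)
    return [(-1) ** j * comb(k, j) * factorial(k + j) // (factorial(j) * factorial(k))
            for j in range(k + 1)]

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# H_ij = <x^i, x^j> = 1/(i + j + 1) with 0-based indices, as exact fractions
H = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

# Q: column k holds the coefficients of q_k (padded with zeros, so Q is triangular);
# D = diag(1, 3, ..., 2n-1)
Q = [[Fraction(q_coeffs(k)[i]) if i <= k else Fraction(0) for k in range(n)]
     for i in range(n)]
D = [[Fraction(2 * k + 1) if i == k else Fraction(0) for k in range(n)] for i in range(n)]
QT = [list(col) for col in zip(*Q)]

Hinv = matmul(matmul(Q, D), QT)  # claimed inverse: Q D Q^T

identity = matmul(H, Hinv)
assert all(identity[i][j] == (1 if i == j else 0) for i in range(n) for j in range(n))
assert all(entry.denominator == 1 for row in Hinv for entry in row)  # integer entries
```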
OP says, "my solution cannot include much outside of row reduction (elementary matrices), the definition of matrix multiplication, and the basic properties of matrix inverses." I don't know if this qualifies.
– Gerry Myerson Jan 22 at 3:38
I saw the clarification in the comments, and glanced over the version edited into the question. This is ... it's more than those specific tools, but is it any more advanced? The calculus is all understandable with the standard single-variable year. The concept of inner products? That probably hasn't come up yet in the linear algebra course, but the dot product in two or three dimensions has come up earlier, and inner products are the natural extension. Gram-Schmidt is a variation on row reduction - instead of zeroing entries, we do it to inner products. And we didn't even actually use it.
– jmerry Jan 22 at 3:57
add a comment |
1 Answer
1
active
oldest
votes
1 Answer
1
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
To abbreviate my post here, a determinant-free proof that the inverse of the Hilbert matrix has integer entries.
Consider the inner product $langle f,grangle =int_0^1 fg$ on nice enough functions. The $ntimes n$ Hilbert matrix $H$ has $ij$ entry (running the labels from zero to $n-1$) $langle x^i,x^jrangle$. This makes it a Gramian matrix. The vectors we used are the usual basis for the polynomials of degree $le n-1$, and from that we can write the inner product of any two such polynomials, written in terms of the usual basis, as $langle f,grangle = f^T H g$.
Let $P$ be an upper triangular matrix with its columns orthonormal with respect to this inner product - the columns are exactly what we get by applying Gram-Schmidt to the standard basis. Then $P^T H P = I$. Move some things around, and $H=(P^T)^{-1} P^{-1}$, so $H^{-1}=Pcdot P^T$.
That's all very general stuff - but we started with something special. We already know a lot about polynomials - in particular, we have an explicit family of orthogonal polynomials with respect to this product, the Legendre polynomials. The usual inner product in the definition there uses the interval $[-1,1]$, but going to $[0,1]$ is just an affine substitution. Legendre polynomials in the standard scaling are normalized to values of $pm 1$ at the endpoints - here, that would be $q_k(x) = frac{1}{k!}frac{d^k}{dx^k}left((x-x^2)^kright)$. We want orthonormal scaling, so we note that
$$langle q_k,q_krangle =int_0^1 frac{1}{(k!)^2}frac{d^k}{dx^k}left((x-x^2)^kright)cdot frac{d^k}{dx^k}left((x-x^2)^kright),dx$$
Integrating by parts $k$ times, we get
$$langle q_k,q_krangle =frac{(-1)^k}{(k!)^2}int_0^1 left((x-x^2)^kright)cdot frac{d^{2k}}{dx^{2k}}left((x-x^2)^kright),dx = binom{2k}{k}int_0^1 x^k(1-k)^x,dx$$
Integrate that by parts another $k$ times, and it becomes
$$langle q_k,q_krangle = binom{2k}{k}int_0^1 frac{k!x^k}{(2k)!}cdot (k!),dx = int_0^1 x^{2k},dx=frac1{2k+1}$$
To get our orthonormal scaling, we then divide $q_k$ by the square root of this number and get $p_k = sqrt{2k+1}q_k$.
Now, we also note that the $q_k$ are all polynomials with integer coefficients; the operator $frac{1}{k!}frac{d^k}{dx^k}$ preserves integer polynomials. So then, we can write
$$P=begin{pmatrix}q_0 & q_1 &cdots & q_{n-1}end{pmatrix} begin{pmatrix}1&0&cdots&0\0&sqrt{3}&cdots&0\ vdots&vdots&ddots&vdots\ 0&0&cdots & sqrt{2n-1}end{pmatrix}$$
$$Pcdot P^T = begin{pmatrix}q_0 & q_1 &cdots & q_{n-1}end{pmatrix} begin{pmatrix}1&0&cdots&0\0&sqrt{3}&cdots&0\ vdots&vdots&ddots&vdots\ 0&0&cdots & sqrt{2n-1}end{pmatrix}^2begin{pmatrix}q_0^T \ q_1 \ cdots \ q_{n-1}end{pmatrix}$$
$$H^{-1} = Pcdot P^T = Qbegin{pmatrix}1&0&cdots&0\0&3&cdots&0\ vdots&vdots&ddots&vdots\ 0&0&cdots&2n-1end{pmatrix}Q^T$$
All three terms of that last product are matrices with integer coefficients, and $H^{-1}$ must be an integer matrix. Of course, now that we have this explicit $LU$ factorization for the inverse, we can calculate all sorts of things about it easily, such as the sum referenced in the linked thread.
$endgroup$
1
$begingroup$
OP says, "my solution cannot include much outside of row reduction (elementary matrices), the definition of matrix multiplication, and the basic properties of matrix inverses." I don't know if this qualifies.
$endgroup$
– Gerry Myerson
Jan 22 at 3:38
$begingroup$
I saw the clarification in the comments, and glanced over the version edited into the question. This is ... it's more than those specific tools, but is it any more advanced? The calculus is all understandable with the standard single-variable year. The concept of inner products? That probably hasn't come up yet in the linear algebra course, but the dot product in two or three dimensions has come up earlier, and inner products are the natural extension. Gram-Schmidt is a variation on row-reduction - instead of zeroing entries, we do it to inner products. And we didn't even actually use it.
$endgroup$
– jmerry
Jan 22 at 3:57
add a comment |
$begingroup$
To abbreviate my post here, a determinant-free proof that the inverse of the Hilbert matrix has integer entries.
Consider the inner product $langle f,grangle =int_0^1 fg$ on nice enough functions. The $ntimes n$ Hilbert matrix $H$ has $ij$ entry (running the labels from zero to $n-1$) $langle x^i,x^jrangle$. This makes it a Gramian matrix. The vectors we used are the usual basis for the polynomials of degree $le n-1$, and from that we can write the inner product of any two such polynomials, written in terms of the usual basis, as $langle f,grangle = f^T H g$.
Let $P$ be an upper triangular matrix with its columns orthonormal with respect to this inner product - the columns are exactly what we get by applying Gram-Schmidt to the standard basis. Then $P^T H P = I$. Move some things around, and $H=(P^T)^{-1} P^{-1}$, so $H^{-1}=Pcdot P^T$.
That's all very general stuff - but we started with something special. We already know a lot about polynomials - in particular, we have an explicit family of orthogonal polynomials with respect to this product, the Legendre polynomials. The usual inner product in the definition there uses the interval $[-1,1]$, but going to $[0,1]$ is just an affine substitution. Legendre polynomials in the standard scaling are normalized to values of $pm 1$ at the endpoints - here, that would be $q_k(x) = frac{1}{k!}frac{d^k}{dx^k}left((x-x^2)^kright)$. We want orthonormal scaling, so we note that
$$langle q_k,q_krangle =int_0^1 frac{1}{(k!)^2}frac{d^k}{dx^k}left((x-x^2)^kright)cdot frac{d^k}{dx^k}left((x-x^2)^kright),dx$$
Integrating by parts $k$ times, we get
$$langle q_k,q_krangle =frac{(-1)^k}{(k!)^2}int_0^1 left((x-x^2)^kright)cdot frac{d^{2k}}{dx^{2k}}left((x-x^2)^kright),dx = binom{2k}{k}int_0^1 x^k(1-k)^x,dx$$
Integrate that by parts another $k$ times, and it becomes
$$langle q_k,q_krangle = binom{2k}{k}int_0^1 frac{k!x^k}{(2k)!}cdot (k!),dx = int_0^1 x^{2k},dx=frac1{2k+1}$$
To get our orthonormal scaling, we then divide $q_k$ by the square root of this number and get $p_k = sqrt{2k+1}q_k$.
Now, we also note that the $q_k$ are all polynomials with integer coefficients; the operator $frac{1}{k!}frac{d^k}{dx^k}$ preserves integer polynomials. So then, we can write
$$P=begin{pmatrix}q_0 & q_1 &cdots & q_{n-1}end{pmatrix} begin{pmatrix}1&0&cdots&0\0&sqrt{3}&cdots&0\ vdots&vdots&ddots&vdots\ 0&0&cdots & sqrt{2n-1}end{pmatrix}$$
$$Pcdot P^T = begin{pmatrix}q_0 & q_1 &cdots & q_{n-1}end{pmatrix} begin{pmatrix}1&0&cdots&0\0&sqrt{3}&cdots&0\ vdots&vdots&ddots&vdots\ 0&0&cdots & sqrt{2n-1}end{pmatrix}^2begin{pmatrix}q_0^T \ q_1 \ cdots \ q_{n-1}end{pmatrix}$$
$$H^{-1} = Pcdot P^T = Qbegin{pmatrix}1&0&cdots&0\0&3&cdots&0\ vdots&vdots&ddots&vdots\ 0&0&cdots&2n-1end{pmatrix}Q^T$$
All three terms of that last product are matrices with integer coefficients, and $H^{-1}$ must be an integer matrix. Of course, now that we have this explicit $LU$ factorization for the inverse, we can calculate all sorts of things about it easily, such as the sum referenced in the linked thread.
$endgroup$
1
$begingroup$
OP says, "my solution cannot include much outside of row reduction (elementary matrices), the definition of matrix multiplication, and the basic properties of matrix inverses." I don't know if this qualifies.
$endgroup$
– Gerry Myerson
Jan 22 at 3:38
$begingroup$
I saw the clarification in the comments, and glanced over the version edited into the question. This is ... it's more than those specific tools, but is it any more advanced? The calculus is all understandable with the standard single-variable year. The concept of inner products? That probably hasn't come up yet in the linear algebra course, but the dot product in two or three dimensions has come up earlier, and inner products are the natural extension. Gram-Schmidt is a variation on row-reduction - instead of zeroing entries, we do it to inner products. And we didn't even actually use it.
$endgroup$
– jmerry
Jan 22 at 3:57
add a comment |
$begingroup$
To abbreviate my post here, a determinant-free proof that the inverse of the Hilbert matrix has integer entries.
Consider the inner product $langle f,grangle =int_0^1 fg$ on nice enough functions. The $ntimes n$ Hilbert matrix $H$ has $ij$ entry (running the labels from zero to $n-1$) $langle x^i,x^jrangle$. This makes it a Gramian matrix. The vectors we used are the usual basis for the polynomials of degree $le n-1$, and from that we can write the inner product of any two such polynomials, written in terms of the usual basis, as $langle f,grangle = f^T H g$.
Let $P$ be an upper triangular matrix with its columns orthonormal with respect to this inner product - the columns are exactly what we get by applying Gram-Schmidt to the standard basis. Then $P^T H P = I$. Move some things around, and $H=(P^T)^{-1} P^{-1}$, so $H^{-1}=Pcdot P^T$.
That's all very general stuff - but we started with something special. We already know a lot about polynomials - in particular, we have an explicit family of orthogonal polynomials with respect to this product, the Legendre polynomials. The usual inner product in the definition there uses the interval $[-1,1]$, but going to $[0,1]$ is just an affine substitution. Legendre polynomials in the standard scaling are normalized to values of $pm 1$ at the endpoints - here, that would be $q_k(x) = frac{1}{k!}frac{d^k}{dx^k}left((x-x^2)^kright)$. We want orthonormal scaling, so we note that
$$langle q_k,q_krangle =int_0^1 frac{1}{(k!)^2}frac{d^k}{dx^k}left((x-x^2)^kright)cdot frac{d^k}{dx^k}left((x-x^2)^kright),dx$$
Integrating by parts $k$ times, we get
$$langle q_k,q_krangle =frac{(-1)^k}{(k!)^2}int_0^1 left((x-x^2)^kright)cdot frac{d^{2k}}{dx^{2k}}left((x-x^2)^kright),dx = binom{2k}{k}int_0^1 x^k(1-k)^x,dx$$
Integrate that by parts another $k$ times, and it becomes
$$langle q_k,q_krangle = binom{2k}{k}int_0^1 frac{k!x^k}{(2k)!}cdot (k!),dx = int_0^1 x^{2k},dx=frac1{2k+1}$$
To get our orthonormal scaling, we then divide $q_k$ by the square root of this number and get $p_k = sqrt{2k+1}q_k$.
Now, we also note that the $q_k$ are all polynomials with integer coefficients; the operator $frac{1}{k!}frac{d^k}{dx^k}$ preserves integer polynomials. So then, we can write
$$P=begin{pmatrix}q_0 & q_1 &cdots & q_{n-1}end{pmatrix} begin{pmatrix}1&0&cdots&0\0&sqrt{3}&cdots&0\ vdots&vdots&ddots&vdots\ 0&0&cdots & sqrt{2n-1}end{pmatrix}$$
$$Pcdot P^T = begin{pmatrix}q_0 & q_1 &cdots & q_{n-1}end{pmatrix} begin{pmatrix}1&0&cdots&0\0&sqrt{3}&cdots&0\ vdots&vdots&ddots&vdots\ 0&0&cdots & sqrt{2n-1}end{pmatrix}^2begin{pmatrix}q_0^T \ q_1 \ cdots \ q_{n-1}end{pmatrix}$$
$$H^{-1} = Pcdot P^T = Qbegin{pmatrix}1&0&cdots&0\0&3&cdots&0\ vdots&vdots&ddots&vdots\ 0&0&cdots&2n-1end{pmatrix}Q^T$$
All three terms of that last product are matrices with integer coefficients, and $H^{-1}$ must be an integer matrix. Of course, now that we have this explicit $LU$ factorization for the inverse, we can calculate all sorts of things about it easily, such as the sum referenced in the linked thread.
$endgroup$
To abbreviate my post here, a determinant-free proof that the inverse of the Hilbert matrix has integer entries.
Consider the inner product $langle f,grangle =int_0^1 fg$ on nice enough functions. The $ntimes n$ Hilbert matrix $H$ has $ij$ entry (running the labels from zero to $n-1$) $langle x^i,x^jrangle$. This makes it a Gramian matrix. The vectors we used are the usual basis for the polynomials of degree $le n-1$, and from that we can write the inner product of any two such polynomials, written in terms of the usual basis, as $langle f,grangle = f^T H g$.
Let $P$ be an upper triangular matrix with its columns orthonormal with respect to this inner product - the columns are exactly what we get by applying Gram-Schmidt to the standard basis. Then $P^T H P = I$. Move some things around, and $H=(P^T)^{-1} P^{-1}$, so $H^{-1}=Pcdot P^T$.
That's all very general stuff - but we started with something special. We already know a lot about polynomials - in particular, we have an explicit family of orthogonal polynomials with respect to this product, the Legendre polynomials. The usual inner product in the definition there uses the interval $[-1,1]$, but going to $[0,1]$ is just an affine substitution. Legendre polynomials in the standard scaling are normalized to values of $pm 1$ at the endpoints - here, that would be $q_k(x) = frac{1}{k!}frac{d^k}{dx^k}left((x-x^2)^kright)$. We want orthonormal scaling, so we note that
$$langle q_k,q_krangle =int_0^1 frac{1}{(k!)^2}frac{d^k}{dx^k}left((x-x^2)^kright)cdot frac{d^k}{dx^k}left((x-x^2)^kright),dx$$
Integrating by parts $k$ times, we get
$$langle q_k,q_krangle =frac{(-1)^k}{(k!)^2}int_0^1 left((x-x^2)^kright)cdot frac{d^{2k}}{dx^{2k}}left((x-x^2)^kright),dx = binom{2k}{k}int_0^1 x^k(1-k)^x,dx$$
Integrate that by parts another $k$ times, and it becomes
$$langle q_k,q_krangle = binom{2k}{k}int_0^1 frac{k!x^k}{(2k)!}cdot (k!),dx = int_0^1 x^{2k},dx=frac1{2k+1}$$
To get our orthonormal scaling, we then divide $q_k$ by the square root of this number and get $p_k = sqrt{2k+1}q_k$.
Now, we also note that the $q_k$ are all polynomials with integer coefficients; the operator $frac{1}{k!}frac{d^k}{dx^k}$ preserves integer polynomials. So then, we can write
$$P=\begin{pmatrix}q_0 & q_1 &\cdots & q_{n-1}\end{pmatrix} \begin{pmatrix}1&0&\cdots&0\\0&\sqrt{3}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots & \sqrt{2n-1}\end{pmatrix}$$
$$P\cdot P^T = \begin{pmatrix}q_0 & q_1 &\cdots & q_{n-1}\end{pmatrix} \begin{pmatrix}1&0&\cdots&0\\0&\sqrt{3}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots & \sqrt{2n-1}\end{pmatrix}^2\begin{pmatrix}q_0^T \\ q_1^T \\ \vdots \\ q_{n-1}^T\end{pmatrix}$$
$$H^{-1} = P\cdot P^T = Q\begin{pmatrix}1&0&\cdots&0\\0&3&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&2n-1\end{pmatrix}Q^T, \qquad Q=\begin{pmatrix}q_0 & q_1 &\cdots & q_{n-1}\end{pmatrix}$$
All three factors of that last product are matrices with integer entries, so $H^{-1}$ must be an integer matrix. Of course, now that we have this explicit triangular factorization for the inverse, we can calculate all sorts of things about it easily, such as the sum referenced in the linked thread.
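None of this is needed for the proof, but the factorization is easy to check with exact arithmetic. A minimal Python sketch (my own addition, not part of the original answer), using the coefficient formula $q_k(x)=\sum_{j=0}^k(-1)^j\binom{k}{j}\binom{k+j}{k}x^j$, which comes from expanding the Rodrigues-type derivative term by term:

```python
from fractions import Fraction
from math import comb

n = 5

# Hilbert matrix, 0-based: H[i][j] = 1/(i + j + 1)
H = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

# Column k of Q holds the (integer) coefficients of q_k:
# q_k(x) = sum_j (-1)^j C(k, j) C(k+j, k) x^j   (comb(k, j) = 0 for j > k)
Q = [[(-1) ** j * comb(k, j) * comb(k + j, k) for k in range(n)]
     for j in range(n)]

D = [[2 * k + 1 if j == k else 0 for k in range(n)] for j in range(n)]

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

Qt = [[Q[j][i] for j in range(n)] for i in range(n)]
Hinv = matmul(matmul(Q, D), Qt)  # integer entries by construction

# Exact check that Q D Q^T really is the inverse of H
I = matmul(H, Hinv)
assert all(I[i][j] == (1 if i == j else 0)
           for i in range(n) for j in range(n))
print(Hinv[0])  # [25, -300, 1050, -1400, 630]
```

Since $Q$ and $D$ have integer entries, `Hinv` consists of Python integers from the start; the only thing left to verify is that it actually inverts $H$, which the exact `Fraction` product confirms.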
edited Jan 22 at 3:52
answered Jan 22 at 3:12


jmerry
$begingroup$
OP says, "my solution cannot include much outside of row reduction (elementary matrices), the definition of matrix multiplication, and the basic properties of matrix inverses." I don't know if this qualifies.
$endgroup$
– Gerry Myerson
Jan 22 at 3:38
$begingroup$
I saw the clarification in the comments, and glanced over the version edited into the question. This is ... it's more than those specific tools, but is it any more advanced? The calculus is all understandable with the standard single-variable year. The concept of inner products? That probably hasn't come up yet in the linear algebra course, but the dot product in two or three dimensions has come up earlier, and inner products are the natural extension. Gram-Schmidt is a variation on row-reduction - instead of zeroing entries, we do it to inner products. And we didn't even actually use it.
$endgroup$
– jmerry
Jan 22 at 3:57
$begingroup$
That theorem only applies to matrices which have integer entries to begin with. In this case, what you would actually need would be that the determinant is $\frac{\pm 1}{N}$ for some integer $N$ where the denominator of every $(n-1) \times (n-1)$ minor determinant divides $N$.
$endgroup$
– Daniel Schepler
Jan 22 at 1:36
$begingroup$
@DanielSchepler Thanks for the response! I must have misread it when I googled similar problems before. I corrected it in the body of the post.
$endgroup$
– Daniel Apsley
Jan 22 at 1:39
$begingroup$
For the invertibility part, the most efficient proof I know of is: if $x = (x_0, x_1, \ldots, x_{n-1})^T$ were a vector in the null space of $A$, then it turns out $0 = x^T A x = \int_0^1 (x_0 + x_1 t + x_2 t^2 + \cdots + x_{n-1} t^{n-1})^2 \, dt$, implying you must have $x_0 = x_1 = \cdots = x_{n-1} = 0$.
$endgroup$
– Daniel Schepler
Jan 22 at 1:41
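The identity behind this comment is the same Gram-matrix fact used in the answer: $x^T A x$ expands to $\int_0^1 p(t)^2\,dt$ for $p(t)=\sum_i x_i t^i$, since $\int_0^1 t^{i+j}\,dt = \frac{1}{i+j+1}$. A small exact-arithmetic check (editorial addition; the sample vector is arbitrary, chosen only for illustration):

```python
from fractions import Fraction

n = 4
H = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

x = [3, -1, 4, -2]  # arbitrary nonzero integer vector (illustrative only)

# Quadratic form x^T H x
form = sum(x[i] * H[i][j] * x[j] for i in range(n) for j in range(n))

# Exact integral of p(t)^2 for p(t) = sum_i x_i t^i: expand the square
# and integrate monomial by monomial, int_0^1 t^{i+j} dt = 1/(i+j+1)
integral = sum(Fraction(x[i] * x[j], i + j + 1)
               for i in range(n) for j in range(n))

assert form == integral
assert form > 0  # positive for any nonzero x, so H has trivial null space
```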
$begingroup$
See here. For reference, this matrix is known as the Hilbert matrix; that thread was the first result in a search for "Hilbert matrix inverse".
$endgroup$
– jmerry
Jan 22 at 2:00
$begingroup$
@jmerry Thanks for that! Knowing what it is is really helpful. Unfortunately, all the proofs of the description of such an inverse are far out of the reach of the class.
$endgroup$
– Daniel Apsley
Jan 22 at 2:24