How do we know the gamma function and Riemann zeta function combine in such a nice way?
Let $\zeta(s) = \prod\limits_p (1 - p^{-s})^{-1}$ be the Riemann zeta function. If we define $L(s) = \pi^{-\frac{s}{2}}\Gamma(\frac{s}{2})\zeta(s)$, then $L$ admits a meromorphic continuation to the complex plane satisfying the functional equation
$$L(s) = L(1-s)$$
The proof techniques are well established now, but at the time they were very sophisticated, making use of the Fourier transform, Poisson summation, and complex line integrals.
My question is: how could anyone have thought to combine the gamma function and the Riemann zeta function in this way?
I see that the proof works, but I don't see how anyone could have ever thought of doing this. The same idea has carried over to prove the meromorphic continuation of other L-functions, but Riemann's combination of the gamma and zeta function seems to be the first of its kind.
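For concreteness, the claimed symmetry is easy to check numerically. Here is a minimal sketch using Python's mpmath (the library choice is just one convenient option, not part of any standard reference):

```python
# Minimal numerical sanity check of L(s) = L(1-s), where
# L(s) = pi^(-s/2) * Gamma(s/2) * zeta(s).
# Sketch only; assumes the mpmath library is installed (pip install mpmath).
from mpmath import mp, mpc, pi, gamma, zeta

mp.dps = 30  # work with 30 significant digits

def L(s):
    """Completed zeta function pi^(-s/2) * Gamma(s/2) * zeta(s)."""
    return pi**(-s / 2) * gamma(s / 2) * zeta(s)

for s in [mpc(0.3, 14.1), mpc(2.5, -3.0), mpc(-1.2, 0.7)]:
    print(s, abs(L(s) - L(1 - s)))  # differences at rounding-error level
```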
complex-analysis number-theory riemann-zeta
asked Jan 31 at 15:00 by D_S
The easiest way to start with the functional equation is from the Fourier series $\frac12 + \lfloor x \rfloor - x = \sum_{n=1}^\infty \frac{\sin(2\pi nx)}{\pi n}$, obtained from the Taylor series of $-\log(1-z)$. Then $\zeta(s) = s \int_0^\infty (\frac12 + \lfloor x \rfloor - x)\,x^{-s-1}\,dx = \sum_{n=1}^\infty s \int_0^\infty \frac{\sin(2\pi nx)}{\pi n}\,x^{-s-1}\,dx = \zeta(1-s)\, s \int_0^\infty \frac{\sin(2\pi x)}{\pi}\,x^{-s-1}\,dx$. The $\Gamma(s/2)$ + Poisson summation approach is more natural in the context of modular forms; to explain how it generalizes you'd need Tate's thesis, adeles, and places at $\infty$.
– reuns
Jan 31 at 15:15
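A minimal numerical sketch of the sawtooth Fourier series quoted in the comment above (plain Python, no external dependencies; the sample points are arbitrary):

```python
# Check the Fourier series  1/2 + floor(x) - x  =  sum_{n>=1} sin(2*pi*n*x) / (pi*n)
# at a few non-integer points by comparing against a long partial sum.
# Illustrative sketch only.
import math

def sawtooth(x):
    return 0.5 + math.floor(x) - x

def partial_sum(x, N):
    return sum(math.sin(2 * math.pi * n * x) / (math.pi * n) for n in range(1, N + 1))

for x in (0.1, 0.37, 2.74):
    print(x, sawtooth(x), partial_sum(x, 100_000))  # should agree to several decimal places
```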
1 Answer
Skimming the details of the proof since you seem to be familiar with it: the starting point to the classical proof of the functional equation is the expression $$\zeta(s)\,\Gamma(s) = \int_0^\infty \frac{t^{s-1}}{e^t-1}\,dt. \tag{$*$}$$ Already we can see the presence of the gamma function. Unfortunately this still only converges for $\text{Re}(s)>1$, so we haven't done any better than our original sum form.

However, Riemann had the idea of deforming this into a contour integral $$\zeta(s)\,\Gamma(s) = \frac{i}{2\sin(\pi s)} \int_C \frac{(-z)^{s-1}}{e^z-1}\,dz,$$ with the contour $C$ coming from positive infinity along the real line slightly in the upper half-plane, circling around the origin in the positive (counterclockwise) direction, and returning to positive infinity along the real line in the lower half-plane. This now looks like something we could evaluate using the residue theorem. In fact the contour as it is presents us with some challenges, but by deforming it suitably we can in fact do just this, and it turns out that the singularities will be at nonzero integral multiples of $2\pi i$ with residues proportional (up to a function of $s$ independent of the integer $n$ in question) to $n^{1-s}$.

Summing over these, we formally have a relationship between $\zeta(s)\Gamma(s)$ and $\zeta(1-s)$, which is excitingly close to what we're looking for; in fact filling in the factors we skipped over we get $$\zeta(1-s) = 2^{1-s}\pi^{-s}\cos\left(\frac{\pi s}{2}\right)\zeta(s)\,\Gamma(s).$$ This is a perfectly fine form for a functional equation, but to get $L$ properly you can apply the duplication formula (I think Legendre's, definitely before Riemann). I was going to write out the derivation, but realized that a) it's not terribly relevant to the question and b) I'm bad at algebra, but it can be done.
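As a quick sanity check of this asymmetric form, here is a minimal numerical sketch using Python's mpmath (an assumed dependency, not part of the answer's argument):

```python
# Numerical check of  zeta(1-s) = 2^(1-s) * pi^(-s) * cos(pi*s/2) * Gamma(s) * zeta(s)
# at a few arbitrary complex points. Sketch only; assumes mpmath is installed.
from mpmath import mp, mpc, pi, cos, gamma, zeta, power

mp.dps = 30

def rhs(s):
    return power(2, 1 - s) * power(pi, -s) * cos(pi * s / 2) * gamma(s) * zeta(s)

for s in [mpc(0.5, 6.0), mpc(2.0, -1.5), mpc(-0.7, 3.3)]:
    print(s, abs(zeta(1 - s) - rhs(s)))  # should be at rounding-error level
```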
As to where the initial equation $(*)$ comes from or why it's natural: if you start with the gamma function, by a change of variables you can naturally involve a factor of $n^{-s}$; and at this point to a mathematician of the nineteenth century the obvious thing to do is to sum over $n \ge 1$, giving us the formula.
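To spell out that last step (a standard computation, under the usual assumption $\text{Re}(s) > 1$ so everything converges absolutely): substituting $t \mapsto nt$ in Euler's integral for $\Gamma(s)$ and then summing the resulting geometric series recovers $(*)$:
$$\Gamma(s)\,n^{-s} = \int_0^\infty t^{s-1} e^{-nt}\,dt \quad\Longrightarrow\quad \Gamma(s)\,\zeta(s) = \sum_{n=1}^\infty \int_0^\infty t^{s-1} e^{-nt}\,dt = \int_0^\infty t^{s-1} \sum_{n=1}^\infty e^{-nt}\,dt = \int_0^\infty \frac{t^{s-1}}{e^t-1}\,dt.$$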
answered Jan 31 at 16:34 by Laertes
Indeed, Riemann was very talented with complex analysis, contour integrals and the residue theorem, probably a little less so with Fourier analysis, and he didn't give the Fourier-analytic clues to what was happening in his functional equation, density of zeros, explicit formula and Riemann hypothesis. Complex analysis and Fourier analysis were both developed during 1820–1840.
– reuns
Jan 31 at 17:15