Showing $\int_0^{\int_0^u\operatorname{sech}v\,dv}\sec v\,dv\equiv u$ and $\int_0^{\int_0^u\sec v\,dv}\operatorname{sech}v\,dv\equiv u$
The two following very weird-looking theorems
$$\int_0^{\int_0^u\operatorname{sech}\upsilon\,d\upsilon}\sec\upsilon\,d\upsilon \equiv u$$
$$\int_0^{\int_0^u\sec\upsilon\,d\upsilon}\operatorname{sech}\upsilon\,d\upsilon \equiv u$$
are simple consequences of the rather remarkable (in my opinion) fact that
$$\int_0^u\operatorname{sech}\upsilon\,d\upsilon$$
$$\int_0^u\sec\upsilon\,d\upsilon$$
are inverse functions of each other (specifically
$$2\operatorname{atn}\exp u-\frac{\pi}{2}$$ & $$\ln\tan\left(\frac{u}{2}+\frac{\pi}{4}\right)$$
respectively, or $\operatorname{gd}u$ and $\operatorname{gd}^{-1}u$ if the convention of using "$\operatorname{gd}$" to stand for the Gudermannian be received).
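(For the sceptical, here's a minimal numerical spot-check of the nested-integral identities - just an mpmath sketch, with nothing in it specific to the argument that follows.)

```python
from mpmath import mp, mpf, quad, sech, sec

mp.dps = 30
for u in [mpf('0.3'), mpf('0.9'), mpf('1.4')]:
    # integrate sech out to u, then integrate sec out to that value: u comes back
    print(u, quad(sec, [0, quad(sech, [0, u])]))
    # and the other way round (u has to stay below pi/2 for the inner sec integral)
    print(u, quad(sech, [0, quad(sec, [0, u])]))
```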
The relations between integrals of circular and hyperbolic functions, and what an extraordinarily tight 'system' they all seem to form, are a constant source of fascination to me - and the particular relation cited here is probably the strangest of all, to my mind.
It's probably too much to expect that a group-under-composition of functions could be formed out of them, or anything quite so neat ... but anyway:
The "weird theorem" (really just one theorem, of course) that is the nominal subject of this post: can it be proven directly from the properties of hyperbolic and circular functions and their relations amongst each other, rather than by the crude expedient of just evaluating the integrals and simply exhibiting them as mutual inverses?
Update
I've just been brewing some thoughts, and I can't help thinking now that it might have something to do with the fact that
$$\int\frac{dy}{y\sqrt{1-y^2}}=\operatorname{asech}y$$
&
$$\int\frac{dy}{y\sqrt{y^2-1}}=\operatorname{asec}y$$
(up to sign and the constant of integration).
(I tend to use "y" in these kinds of integral, as when I was shown them the very first time it was by differentiating $y$ = circular or hyperbolic function of $x$ & expressing the result in terms of $y$; & I've just never gotten out of the habit.)
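A quick numerical check of those two derivative-forms - a little mpmath sketch, which also shows the minus sign the first of them is glossing over:

```python
from mpmath import mp, mpf, diff, asech, asec, sqrt

mp.dps = 30
y = mpf('0.6')
print(diff(asech, y), -1/(y*sqrt(1 - y**2)))   # asech' carries a minus sign on (0,1)
y = mpf('1.6')
print(diff(asec, y), 1/(y*sqrt(y**2 - 1)))     # asec' is exactly the stated integrand for y > 1
```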
@ Michael Hoppe
Let's see ... they are effectively compositions so
$$\operatorname{sech}u\cdot\operatorname{sec}\int_0^u\operatorname{sech}\upsilon\,d\upsilon\equiv 1$$
$$\operatorname{sec}u\cdot\operatorname{sech}\int_0^u\sec\upsilon\,d\upsilon\equiv 1 \ \dots $$
I think that's what they would be in differentiated form. Is there any lead in that?
$$\sec\int_0^u\operatorname{sech}\upsilon\,d\upsilon\equiv \cosh u \quad\cdots\quad(\mathrm{i})$$
$$\operatorname{sech}\int_0^u\sec\upsilon\,d\upsilon\equiv \cos u \quad\cdots\quad(\mathrm{ii})$$
they certainly look less formidable in that form.
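And (i) & (ii) do check out numerically - a quick mpmath sketch:

```python
from mpmath import mp, mpf, quad, sech, sec, cosh, cos

mp.dps = 30
u = mpf('0.7')
print(sec(quad(sech, [0, u])), cosh(u))   # (i): both ~ 1.2552
print(sech(quad(sec, [0, u])), cos(u))    # (ii): both ~ 0.7648
```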
Manipulating
$$\operatorname{sech}u + i\tanh u = \exp\left[i\arctan\left(\sinh u\right)\right]$$
a little, we get
$$\operatorname{sech}u =\cos\operatorname{atn}\sinh u \quad\cdots\quad(\mathrm{iii})$$
$$\sec u =\cosh\operatorname{asinh}\tan u \quad\cdots\quad(\mathrm{iv})$$
Replacing $\sec()$ in (i) with its identity in (iv), & $\operatorname{sech}()$ in (ii) with its identity in (iii), & then peeling the outer functions away one by one, we get
$$\int_0^u\operatorname{sech}\upsilon\,d\upsilon\equiv \operatorname{atn}\sinh u$$
$$\int_0^u\sec\upsilon\,d\upsilon\equiv \operatorname{asinh}\tan u$$
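These closed forms are easy to spot-check too (mpmath again):

```python
from mpmath import mp, mpf, quad, sech, sec, atan, sinh, asinh, tan

mp.dps = 30
u = mpf('0.9')
print(quad(sech, [0, u]), atan(sinh(u)))   # both are gd(u)
print(quad(sec,  [0, u]), asinh(tan(u)))   # both are gd^{-1}(u)
```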
Looks a bit like going round in circles. No, it's not quite that - it's more like tracing the threads all over the place & observing how wonderfully they join up end-to-end, no matter how crazy an excursion they make. Well, it does show that the functions in question are indeed inverses - but - we have still done it by solving the integrals ... really. Is this an improvement? I'm not sure. But I have certainly tried to incorporate the various advice that the contributors have most graciously dispensed. And we've gotten another rather curioferous theorem into the bargain.
$$\operatorname{sech}u + i\tanh u = \exp\left[i\int_0^u\operatorname{sech}\upsilon\,d\upsilon\right]$$
$$\operatorname{sech}u= \cos\left[\int_0^u\operatorname{sech}\upsilon\,d\upsilon\right]$$
$$\sec u= \cosh\left[\int_0^u\sec\upsilon\,d\upsilon\right]$$
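A tiny numerical confirmation of those three statements (mpmath sketch):

```python
from mpmath import mp, mpf, quad, sech, sec, tanh, cos, cosh, exp

mp.dps = 30
u = mpf('1.1')
G = quad(sech, [0, u])                   # \int_0^u sech
print(sech(u) + 1j*tanh(u), exp(1j*G))   # the Euler-style identity
print(sech(u), cos(G))
print(sec(u), cosh(quad(sec, [0, u])))
```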
And it's also quite possible that I've missed the point or lost the plot somewhere along the line!
Nor have I forgotten my line of thought either - the way that if you integrate
$$\frac{dy}{y\sqrt{1-y^2}}$$
and the range of integration straddles the point $y=1$, it's beautifully incorporated by $1/i = -i$ going outside; and the $\operatorname{asec}$ function 'splices' onto the $\operatorname{asech}$ - as both have a √ singularity at that point - & turns through a right-angle.
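That splicing can be seen numerically too: with the principal branches as mpmath happens to implement them, $\operatorname{asech}$ continued past $x=1$ comes out as exactly $i$ times $\operatorname{asec}$ - the right-angle turn.

```python
from mpmath import mp, mpf, acosh, acos

mp.dps = 30
x = mpf('1.7')         # a point beyond the splice at x = 1
print(acosh(1/x))      # asech(x) continued past 1: purely imaginary
print(1j*acos(1/x))    # i * asec(x): the same number (on this branch choice)
```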
hyperbolic-functions trigonometric-integrals
asked Nov 16 '18 at 10:32, edited Nov 16 '18 at 19:59 – AmbretteOrrisey
Very interesting and would be intriguing to look at other pairs too.
– Richard Martin
Nov 16 '18 at 10:54
You see what I mean then - like in other branches of mathematics - linear algebra being fertile ground for this kind of thing - you prove a theorem not by unpacking the content of whatever items it might be that you wish to relate, but by examining the patterns & structures that such items form when taken together as an ensemble. To me that theorem that I have cited here looks for all the world like something that ought to be showable to proceed in its own right from the very recipe itself for circular & hyperbolic functions.
– AmbretteOrrisey
Nov 16 '18 at 11:28
I think, more to the point, the reason this holds is that $$\operatorname{sech}u + i\tanh u = \exp\left[i\arctan\left(\sinh u\right)\right]$$
– Ron Gordon
Nov 16 '18 at 14:57
@Ron Gordon -- I'm getting the hang of this site! I first thought "why on earth did you use LaTeX notation in a field in which it isn't interpreted!!?" ... then I had the brilliant brilliant idea, which I'm sure no-one has ever thought of before, of copying the text, pasting it into an answer field, and summoning a preview!! Thanks for that contribution. I'll examine it ... but I was actually brewing some thoughts of my own that I was just about to send in. I'll put it as an edit to the original question ... I get that prerogative, it being my question!
– AmbretteOrrisey
Nov 16 '18 at 15:49
@AmbretteOrrisey: yeah, that's what I do and did here.
– Ron Gordon
Nov 16 '18 at 15:50
2 Answers
I've always loved the Gudermannian function. The least mystifying way to start any discussion of it is to note that the functions $f(t):=\frac{2t}{1+t^2},\,g(t):=\frac{1-t^2}{1+t^2}$ satisfy not only $f(\tan\tfrac{x}{2})=\sin x,\,g(\tan\tfrac{x}{2})=\cos x$, but also $f(\tanh\tfrac{x}{2})=\tanh x,\,g(\tanh\tfrac{x}{2})=\operatorname{sech}x$. From this it follows that the definition $\operatorname{gd}x:=2\arctan\tanh\frac{x}{2}$ satisfies results such as $\tan\tfrac{\operatorname{gd}x}{2}=\tanh\frac{x}{2},\,\sin \operatorname{gd}x=\tanh x$ etc.
Now for the integrals. Note that $\tfrac{\operatorname{d}}{\operatorname{d}x}\operatorname{gd}x=\frac{\operatorname{sech}^2\frac{x}{2}}{1+\tanh^2\frac{x}{2}}=\operatorname{sech}x$. But to go from the Gudermannian to its inverse, all I have to do is swap the two kinds of "tangent". So look what happens to the derivative: $$\tfrac{\operatorname{d}}{\operatorname{d}x}\operatorname{gd}^{-1}x=\frac{\sec^2\frac{x}{2}}{1-\tan^2\frac{x}{2}}=\sec x.$$ You know that sign difference when you compare $1=\cos^2\tfrac{x}{2}+\sin^2\tfrac{x}{2}$ to $1=\cosh^2\tfrac{x}{2}-\sinh^2\tfrac{x}{2}$, or $\cos x=\cos^2\tfrac{x}{2}-\sin^2\tfrac{x}{2}$ to $\cosh x=\cosh^2\tfrac{x}{2}+\sinh^2\tfrac{x}{2}$? It perfectly balances the sign difference when comparing $y=\tan x\implies y'=1+y^2$ to $y=\tanh x\implies y'=1-y^2$, i.e. comparing $\tfrac{\operatorname{d}}{\operatorname{d}x}\arctan x=\tfrac{1}{1+x^2}$ to $\tfrac{\operatorname{d}}{\operatorname{d}x}\operatorname{artanh} x=\tfrac{1}{1-x^2}$. In fact, it causes that result about derivatives.
answered Nov 20 '18 at 20:15 – J.G.
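A quick numerical spot-check of those two derivatives - a minimal mpmath sketch:

```python
from mpmath import mp, mpf, diff, atan, atanh, tan, tanh, sech, sec

mp.dps = 30
gd     = lambda x: 2*atan(tanh(x/2))
gd_inv = lambda x: 2*atanh(tan(x/2))
x = mpf('0.8')
print(diff(gd, x),     sech(x))   # gd'       = sech
print(diff(gd_inv, x), sec(x))    # (gd^{-1})' = sec
```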
It's a kind of dense vast web of symmetries; and there's probably pretty much unlimited scope for spinning new threads. I sometimes wonder whether some kind of algebraic structure could be forged out of it - something a bit like a group maybe. I'll leave actually doing that to the serious heavyweights, though ... if they think there's any mileage in it at all! And the Gudermannian function ... it is lovable! ... and people are looking at a depiction of it all over the place & every day in the Mercator projection. Another one I recently discovered is sl - sn for k=i.
– AmbretteOrrisey
Nov 20 '18 at 20:28
@AmbretteOrrisey The inverse is called the Lambertian in the context of the Mercator projection. Proving its conjectured value was a major unsolved problem in the mid-17th century. Of course, if they'd known the FTC, then it would have been trivial. I've always been disappointed no-one seems to have access to the very first proof of the integral from 1668.
– J.G.
Nov 20 '18 at 20:46
Sometimes in these archaic proofs they effectively do use integral calculus without actually broaching it explicitly as a particular reasoning process in its own right. This is so in the case of the proof by ... was it Archimedes? ...
– AmbretteOrrisey
Nov 20 '18 at 20:54
of the formula for the volume of a sphere, with the 4/3 factor entering in. ¶ I didn't know the inverse of the Gudermannian was canonised & named after Lambert (who of course was a cartographer, there being a few projections named after him ... assuming it's the same Lambert, which is pretty safe, I think!). Yep, thanks for that instruction. For sure, it was hard work in those days, without the differential & integral calculus!
– AmbretteOrrisey
Nov 20 '18 at 20:59
I've managed to arrive at some degree of generalisation of this - in fact through the observation that $\operatorname{asech}$ & $\operatorname{asec}$ are both integrals of 'complementary' functions - complementary in the sense of having occurrences of $1-x$ replaced with $x-1$ as the point $x=1$ is traversed. An observation that has arisen through this little exercise is that if we have a function $\operatorname{f}$, and we integrate it to get its primitive, say $\operatorname{g}$, and then we take the inverse of that, say $\operatorname{ag}$, and integrate that, it is actually equivalent to integrating $\operatorname{f}$ multiplied by the ordinate. This is easy to see on a graph: the integral just described is the area enclosed between the curve $y=\operatorname{g}(x)$, the $y$-axis, and the horizontal line joining $(0, \operatorname{g}(x))$ to $(x, \operatorname{g}(x))$, which is
$$\int x\,d(\operatorname{g}(x)) = \int x\operatorname{f}(x)\,dx .$$
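Here is that area identity checked numerically on a sample drawn from this very thread - $\operatorname{f}=\operatorname{sech}$, whose primitive $\operatorname{g}$ is $\operatorname{gd}$, and whose $\operatorname{ag}=\operatorname{gd}^{-1}$ is $\operatorname{asinh}\tan$ (an mpmath sketch):

```python
from mpmath import mp, mpf, quad, sech, asinh, tan

mp.dps = 25
X   = mpf('0.9')
gX  = quad(sech, [0, X])                        # g(X), with f = sech and g = gd
lhs = quad(lambda y: asinh(tan(y)), [0, gX])    # integral of ag = g^{-1}, taken up to g(X)
rhs = quad(lambda x: x*sech(x), [0, X])         # integral of x*f(x)
print(lhs, rhs)                                 # the two agree
```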
In this particular case, and considering $0\leq x\leq 1$, we have
$$\int_x^1\frac{d\upsilon}{\upsilon\sqrt{1-\upsilon^2}}=\operatorname{asech}(x) , $$
and
$$\int_x^1\frac{d\upsilon}{\sqrt{1-\upsilon^2}}=\operatorname{acos}(x) , $$
and (using $y$, to emphasise that we are switching to viewing the integration along the $y$-axis)
$$\int_0^y\operatorname{sech}\upsilon\,d\upsilon=\operatorname{acos}x=\operatorname{acos}(\operatorname{sech}y) ,$$
which is indeed one of the formulæ arrived at in the main body of the question.
Likewise, for $x\geq 1$ we have
$$\int_1^x\frac{d\upsilon}{\upsilon\sqrt{\upsilon^2-1}}=\operatorname{asec}(x) , $$
and
$$\int_1^x\frac{d\upsilon}{\sqrt{\upsilon^2-1}}=\operatorname{acosh}(x) , $$
and (using $y$, to emphasise ... again)
$$\int_0^y\sec\upsilon\,d\upsilon=\operatorname{acosh}x=\operatorname{acosh}(\sec y) ,$$
which is the corresponding formula arrived at in the main body.
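All four of those primitives can be spot-checked numerically; mpmath's default tanh-sinh quadrature copes with the square-root endpoint singularities at $\upsilon=1$:

```python
from mpmath import mp, mpf, quad, sqrt, asech, acos, asec, acosh

mp.dps = 25
x = mpf('0.6')
print(quad(lambda v: 1/(v*sqrt(1 - v**2)), [x, 1]), asech(x))
print(quad(lambda v: 1/sqrt(1 - v**2),     [x, 1]), acos(x))
x = mpf('1.6')
print(quad(lambda v: 1/(v*sqrt(v**2 - 1)), [1, x]), asec(x))
print(quad(lambda v: 1/sqrt(v**2 - 1),     [1, x]), acosh(x))
```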
It can be seen with a little consideration that the two integrals in $y$ being inverses of each other hinges on the functions $\operatorname{acosh}$ & $\operatorname{asech}$ being a pair $\phi$ & $\psi$ such that
$$\phi(x)\equiv\psi(1/x) ;$$
and likewise for $\operatorname{acos}$ & $\operatorname{asec}$.
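Numerically, for instance:

```python
from mpmath import mp, mpf, acosh, asech, acos, asec

mp.dps = 25
x = mpf('1.9')
print(acosh(x), asech(1/x))   # phi(x) = psi(1/x) for the hyperbolic pair
x = mpf('0.6')
print(acos(x), asec(1/x))     # and for the circular pair
```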
Translating this into terms of the origin of these functions as integrals of
$$f_l(x)\equiv\frac{1}{x\sqrt{1-x^2}} \qquad\text{and}\qquad f_r(x)\equiv\frac{1}{x\sqrt{x^2-1}} $$
(l = left & r = right), what it hinges on is that
$$f_a(x)\,dx\equiv (1/x)\,f_b(1/x)\,d(1/x) \qquad\text{and}\qquad f_a(1/x)\,d(1/x)\equiv x\,f_b(x)\,dx $$
(up to sign), where a = l or r & b = r or l. This is very fiddly to put into symbols: but basically what it hinges on is that if $1/x$ be substituted for $x$ in $f_l\,dx$ then $x\,f_r\,dx$ should be obtained; if $1/x$ be substituted for $x$ in $x\,f_l\,dx$ then $f_r\,dx$ should be obtained; if $1/x$ be substituted for $x$ in $f_r\,dx$ then $x\,f_l\,dx$ should be obtained; and if $1/x$ be substituted for $x$ in $x\,f_r\,dx$ then $f_l\,dx$ should be obtained: basically just thorough reciprocity under those transformations.
It can also be seen that this happens if the function $\operatorname{f}$ is of the form reciprocal of ($x$ × a function that changes $1-x$ into $x-1$, or $x-1$ into $1-x$, when $1/x$ is substituted for $x$ and it is multiplied by the $x$ coming from $d(1/x)=-dx/x^2$). As I said, this is terribly fiddly to put into words & symbols; but the underlying idea is really quite an elementary one.
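A one-line numerical check of that reciprocity for the pair above (the $1/x^2$ being the $|d(1/x)/dx|$):

```python
from mpmath import mp, mpf, sqrt

mp.dps = 25
f_l = lambda x: 1/(x*sqrt(1 - x**2))   # meant for 0 < x < 1
f_r = lambda x: 1/(x*sqrt(x**2 - 1))   # meant for x > 1
x = mpf('1.7')
print(f_l(1/x)/x**2, x*f_r(x))         # equal, as the reciprocity (up to sign) requires
```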
So we could construct pairs of functions of the following form:
$$\frac{1}{x\left((1-x)^k(1+x)^{n-k}\right)^{1/n}} $$
&
$$\frac{1}{x\left((x-1)^k(1+x)^{n-k}\right)^{1/n}} ,$$
where $n$ is a natural number, and $k$ is one of the Euler totient set of $n$; and the functions obtained by taking the primitives of these ought to have this reciprocal property we have been discussing, of the integrals of their inverses being mutual inverses.
It actually works beautifully for the case $n=1$, even though that results in integrals that have infinities; and it's clear that it works for $n=2$, as that is the case that prompted this post in the first place. As for higher values of $n$ ... anything $>2$ results in thoroughly diabolickal hypergeometric functions that have who-knows-what inverses, and I shall have to feed them into some kind of mathematics package to test them for this property. It's actually not so bad, though, testing for mutual inverses (a bit like it's not so bad testing whether two numbers are coprime): you don't have to actually compute the inverses - you can just feed one into the other as an argument & see whether the identity function drops out.
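Here is the sort of crude numerical harness I have in mind for that test - everything built by quadrature and bisection, with the primitives based at $x=1$ and the outer integrals taken from $0$, as in the $n=2$ case; so it is only a sketch under those assumptions, and slow, but it needs nothing beyond mpmath:

```python
from mpmath import mp, mpf, quad

mp.dps = 15
n, k = 2, 1   # n=2, k=1 is the sec/sech case; try e.g. n, k = 3, 1 to probe the conjecture

f_l = lambda x: 1/(x*((1 - x)**k*(1 + x)**(n - k))**(mpf(1)/n))   # 0 < x < 1
f_r = lambda x: 1/(x*((x - 1)**k*(1 + x)**(n - k))**(mpf(1)/n))   # x > 1

g_l = lambda x: quad(f_l, [x, 1])   # decreasing primitive, zero at x = 1  (asech when n = 2)
g_r = lambda x: quad(f_r, [1, x])   # increasing primitive, zero at x = 1  (asec  when n = 2)

def invert(g, lo, hi, y, steps=60):
    """Crude bisection inverse of a monotone g on (lo, hi); assumes y lies in g's range there."""
    increasing = g(hi) > g(lo)
    for _ in range(steps):
        mid = (lo + hi)/2
        if (g(mid) < y) == increasing:
            lo = mid
        else:
            hi = mid
    return (lo + hi)/2

h_l = lambda y: invert(g_l, mpf('1e-12'), 1 - mpf('1e-12'), y)   # = sech(y) when n = 2
h_r = lambda y: invert(g_r, 1 + mpf('1e-12'), mpf(100),     y)   # = sec(y)  when n = 2

H_l = lambda Y: quad(h_l, [0, Y])   # integral of the inverse of g_l  (gd      when n = 2)
H_r = lambda Y: quad(h_r, [0, Y])   # integral of the inverse of g_r  (gd^{-1} when n = 2)

u = mpf('0.8')
print(H_r(H_l(u)))   # ~0.8 if the conjectured inverse relation holds for this n, k
```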
For $n=1$, the pair is
$$-\ln(e^{-y}+1)$$
&
$$-\ln(e^{-y}-1) .$$
The former is from
$$\frac{-1}{x(1-x)}$$
... integral ...
$$\ln\frac{1-x}{x}$$
... inverse ...
$$\frac{1}{1+e^y}$$
... integral ...
$$-\ln(e^{-y}+1) ;$$
& the latter from
$$\frac{1}{x(x-1)}$$
... integral ...
$$\ln\frac{x-1}{x}$$
... inverse ...
$$\frac{1}{1-e^y}$$
... integral ...
$$-\ln(e^{-y}-1) .$$
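And indeed those two check out as mutual inverses (on the branch where everything is real, i.e. negative arguments) - a quick mpmath sketch:

```python
from mpmath import mp, mpf, log, exp

mp.dps = 30
A = lambda z: -log(exp(-z) + 1)
B = lambda z: -log(exp(-z) - 1)   # real for z < 0, which is exactly where A takes its values
z = mpf('-1.3')
print(A(B(z)), B(A(z)))           # both come back as -1.3
```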
This isn't as much of a generalisation as I was hoping for; but it does at least exhibit the original pair of functions being queried as being part of some kind of pattern, and not just an isolated special case that just happens to have that property because it does!
The functions gotten in the $n=1$ régime are worthy of some comment, I think, on various grounds. For one thing, it will be observed that in this case $f_l$ & $f_r$ are not actually different at all! However ... the integrals are different functions. And also we get infinities in this case, but fortunately it doesn't foil the reasoning. If we look at the whole matter graphically, we see that in our familiar $n=2$ case, the two integrals $g_l$ & $g_r$ spring from the point $(1,0)$ square-root-wise; whereas the functions under $n=1$ proceed asymptotically from $(1,-\infty)$; and for $n>1$ they will proceed from the point $(1,0)$ with behaviour tending to linearity - like $(x-1)^{1-1/n}$, to be more precise - so the vertical & then curving-down section will get progressively tighter with increasing $n$.
This brings me to the next point: the functions for $n=1$ positively abound in negativity; normally I would want to translate the whole affair into a region better populated by positive numbers ... but I think in this case they are better left as they are, as they better exhibit that continuity & progression of behaviour just described. It applies also to the functions that constitute the final result: like any bona-fide self-respecting mutual inverses they ought to be mutual reflections in the line $x=y$; in the $n=2$ case they spring from the origin & splay out to become asymptotic, one to the line $x=\pi/2$, the other to the line $y=\pi/2$, whereas in the $n=1$ case they approach asymptotically from $(-\infty,-\infty)$ along the negative branch of the identity-function graph, and splay out to become asymptotic, one to the $x$-axis, the other to the $y$-axis, on the negative faces of them: this might mean that most of the affair is in terms of negative numbers ... but it the better shows that continuity of evolution of behaviour with increasing $n$.
Images for Case $n=2$ (the original subject of the post)
The upper diagram shows the functions $y=\operatorname{asech}x$ & $y=\operatorname{asec}x$, and the lower the integrals of these along the $y$-axis, $x=\operatorname{acos}\operatorname{sech}y$ & $x=\operatorname{acosh}\operatorname{sec}y$.
Images for Case $n=1$
The upper diagram shows the functions $y=\ln((1-x)/x)$ & $y=\ln((x-1)/x)$, and the lower the integrals of these along the $y$-axis, $x=-\ln(e^{-y}+1)$ & $x=-\ln(e^{-y}-1)$.
add a comment |
Your Answer
StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
});
});
}, "mathjax-editing");
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "69"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3000987%2fshowing-int-0-int-0u-rm-sechvdv-sec-vdv-equiv-u-and-int-0-int-0u-s%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
2 Answers
2
active
oldest
votes
2 Answers
2
active
oldest
votes
active
oldest
votes
active
oldest
votes
I've always loved the Gudermannian function. The least mystifying way to start any discussion of it is to note that the functions $f(t):=frac{2t}{1-t^2},,g(t):=frac{1-t^2}{1+t^2}$ satisfy not only $f(tantfrac{x}{2})=sin x,,g(tantfrac{x}{2})=cos x$, but also $f(tanhtfrac{x}{2})=tanh x,,g(tanhtfrac{x}{2})=operatorname{sech}x$. From this it follows that the definition $operatorname{gd}x:=2arctantanhfrac{x}{2}$ satisfies results such as $tantfrac{operatorname{gd}x}{2}=tanhfrac{x}{2},,sin operatorname{gd}x=tanh x$ etc.
Now for the integrals. Note that $tfrac{operatorname{d}}{operatorname{d}x}operatorname{gd}x=frac{operatorname{sech}^2frac{x}{2}}{1+tanh^2frac{x}{2}}=operatorname{sech}x$. But to go from the Gudermannian to its inverse, all I have to do is swap the two kinds of "tangent". So look what happens to the derivative: $$tfrac{operatorname{d}}{operatorname{d}x}operatorname{gd}^{-1}x=frac{sec^2frac{x}{2}}{1-tan^2frac{x}{2}}=sec x.$$You know that sign difference when you compare $1=cos^2tfrac{x}{2}+sin^2tfrac{x}{2}$ to $1=cosh^2tfrac{x}{2}-sinh^2tfrac{x}{2}$, or $cos x=cos^2tfrac{x}{2}-sin^2tfrac{x}{2}$ to $cosh x=cosh^2tfrac{x}{2}+sinh^2tfrac{x}{2}$? It perfectly balances the sign difference when comparing $y=tan ximplies y'=1+y^2$ to $y=tanh ximplies y'=1-y^2$, i.e. comparing $tfrac{operatorname{d}}{operatorname{d}x}tan x=tfrac{1}{1+x^2}$ to $tfrac{operatorname{d}}{operatorname{d}x}tanh x=tfrac{1}{1-x^2}$. In fact, it causes that result about derivatives.
It's a kind of dense vast web of symmetries; and there's probably prettymuch unlimited scope for spinning new threads. I sometimes wonder whether some kind of algebraic structure could be forged out of it - something a bit like a group maybe. I'll leave actually doing that to the serious heavyweights, though ... if they think there's any mileage in it atall! And the gudermannian function ... it is lovable! ... and people are looking at a depiction of it all over the place & every day in the Mercator projection. Another one I recently discovered is sl - sn for k=i.
– AmbretteOrrisey
Nov 20 '18 at 20:28
@AmbrettrOrrisey The inverse is called the Lambertian in the context of Mercator projection. Proving its conjectured value was a major unsolved problem in the mid-16th century. Of course, is they'd known the FTC, then it would have been trivial. I've always been disappointed no-one seems to have access to the very first proof of the integral from 1668.
– J.G.
Nov 20 '18 at 20:46
Sometimes in these archaic proofs they effectively do use integral calculus without actually broaching it explicitly as a particular reasoning process in it's own right. This is so in the case of the proof by ... was it Archimedes? o gf
– AmbretteOrrisey
Nov 20 '18 at 20:54
of the formula for the volume of a a sphere, with the 4/3 factor entering in. ¶ I didn't know the inverse of the Gudermannian was canonised & named after Lambert (who of course was a cartographer, there being a few projections named after him ... assuming it's the same Lambert, which is pretty safe, I think!). Yep thanks for that instruction. For sure, it was hard work in those days, without the differential & integral calculus!
– AmbretteOrrisey
Nov 20 '18 at 20:59
add a comment |
I've always loved the Gudermannian function. The least mystifying way to start any discussion of it is to note that the functions $f(t):=frac{2t}{1-t^2},,g(t):=frac{1-t^2}{1+t^2}$ satisfy not only $f(tantfrac{x}{2})=sin x,,g(tantfrac{x}{2})=cos x$, but also $f(tanhtfrac{x}{2})=tanh x,,g(tanhtfrac{x}{2})=operatorname{sech}x$. From this it follows that the definition $operatorname{gd}x:=2arctantanhfrac{x}{2}$ satisfies results such as $tantfrac{operatorname{gd}x}{2}=tanhfrac{x}{2},,sin operatorname{gd}x=tanh x$ etc.
Now for the integrals. Note that $tfrac{operatorname{d}}{operatorname{d}x}operatorname{gd}x=frac{operatorname{sech}^2frac{x}{2}}{1+tanh^2frac{x}{2}}=operatorname{sech}x$. But to go from the Gudermannian to its inverse, all I have to do is swap the two kinds of "tangent". So look what happens to the derivative: $$tfrac{operatorname{d}}{operatorname{d}x}operatorname{gd}^{-1}x=frac{sec^2frac{x}{2}}{1-tan^2frac{x}{2}}=sec x.$$You know that sign difference when you compare $1=cos^2tfrac{x}{2}+sin^2tfrac{x}{2}$ to $1=cosh^2tfrac{x}{2}-sinh^2tfrac{x}{2}$, or $cos x=cos^2tfrac{x}{2}-sin^2tfrac{x}{2}$ to $cosh x=cosh^2tfrac{x}{2}+sinh^2tfrac{x}{2}$? It perfectly balances the sign difference when comparing $y=tan ximplies y'=1+y^2$ to $y=tanh ximplies y'=1-y^2$, i.e. comparing $tfrac{operatorname{d}}{operatorname{d}x}tan x=tfrac{1}{1+x^2}$ to $tfrac{operatorname{d}}{operatorname{d}x}tanh x=tfrac{1}{1-x^2}$. In fact, it causes that result about derivatives.
It's a kind of dense vast web of symmetries; and there's probably prettymuch unlimited scope for spinning new threads. I sometimes wonder whether some kind of algebraic structure could be forged out of it - something a bit like a group maybe. I'll leave actually doing that to the serious heavyweights, though ... if they think there's any mileage in it atall! And the gudermannian function ... it is lovable! ... and people are looking at a depiction of it all over the place & every day in the Mercator projection. Another one I recently discovered is sl - sn for k=i.
– AmbretteOrrisey
Nov 20 '18 at 20:28
@AmbrettrOrrisey The inverse is called the Lambertian in the context of Mercator projection. Proving its conjectured value was a major unsolved problem in the mid-16th century. Of course, is they'd known the FTC, then it would have been trivial. I've always been disappointed no-one seems to have access to the very first proof of the integral from 1668.
– J.G.
Nov 20 '18 at 20:46
Sometimes in these archaic proofs they effectively do use integral calculus without actually broaching it explicitly as a particular reasoning process in it's own right. This is so in the case of the proof by ... was it Archimedes? o gf
– AmbretteOrrisey
Nov 20 '18 at 20:54
of the formula for the volume of a a sphere, with the 4/3 factor entering in. ¶ I didn't know the inverse of the Gudermannian was canonised & named after Lambert (who of course was a cartographer, there being a few projections named after him ... assuming it's the same Lambert, which is pretty safe, I think!). Yep thanks for that instruction. For sure, it was hard work in those days, without the differential & integral calculus!
– AmbretteOrrisey
Nov 20 '18 at 20:59
add a comment |
I've always loved the Gudermannian function. The least mystifying way to start any discussion of it is to note that the functions $f(t):=frac{2t}{1-t^2},,g(t):=frac{1-t^2}{1+t^2}$ satisfy not only $f(tantfrac{x}{2})=sin x,,g(tantfrac{x}{2})=cos x$, but also $f(tanhtfrac{x}{2})=tanh x,,g(tanhtfrac{x}{2})=operatorname{sech}x$. From this it follows that the definition $operatorname{gd}x:=2arctantanhfrac{x}{2}$ satisfies results such as $tantfrac{operatorname{gd}x}{2}=tanhfrac{x}{2},,sin operatorname{gd}x=tanh x$ etc.
Now for the integrals. Note that $tfrac{operatorname{d}}{operatorname{d}x}operatorname{gd}x=frac{operatorname{sech}^2frac{x}{2}}{1+tanh^2frac{x}{2}}=operatorname{sech}x$. But to go from the Gudermannian to its inverse, all I have to do is swap the two kinds of "tangent". So look what happens to the derivative: $$tfrac{operatorname{d}}{operatorname{d}x}operatorname{gd}^{-1}x=frac{sec^2frac{x}{2}}{1-tan^2frac{x}{2}}=sec x.$$You know that sign difference when you compare $1=cos^2tfrac{x}{2}+sin^2tfrac{x}{2}$ to $1=cosh^2tfrac{x}{2}-sinh^2tfrac{x}{2}$, or $cos x=cos^2tfrac{x}{2}-sin^2tfrac{x}{2}$ to $cosh x=cosh^2tfrac{x}{2}+sinh^2tfrac{x}{2}$? It perfectly balances the sign difference when comparing $y=tan ximplies y'=1+y^2$ to $y=tanh ximplies y'=1-y^2$, i.e. comparing $tfrac{operatorname{d}}{operatorname{d}x}tan x=tfrac{1}{1+x^2}$ to $tfrac{operatorname{d}}{operatorname{d}x}tanh x=tfrac{1}{1-x^2}$. In fact, it causes that result about derivatives.
I've always loved the Gudermannian function. The least mystifying way to start any discussion of it is to note that the functions $f(t):=frac{2t}{1-t^2},,g(t):=frac{1-t^2}{1+t^2}$ satisfy not only $f(tantfrac{x}{2})=sin x,,g(tantfrac{x}{2})=cos x$, but also $f(tanhtfrac{x}{2})=tanh x,,g(tanhtfrac{x}{2})=operatorname{sech}x$. From this it follows that the definition $operatorname{gd}x:=2arctantanhfrac{x}{2}$ satisfies results such as $tantfrac{operatorname{gd}x}{2}=tanhfrac{x}{2},,sin operatorname{gd}x=tanh x$ etc.
Now for the integrals. Note that $tfrac{operatorname{d}}{operatorname{d}x}operatorname{gd}x=frac{operatorname{sech}^2frac{x}{2}}{1+tanh^2frac{x}{2}}=operatorname{sech}x$. But to go from the Gudermannian to its inverse, all I have to do is swap the two kinds of "tangent". So look what happens to the derivative: $$tfrac{operatorname{d}}{operatorname{d}x}operatorname{gd}^{-1}x=frac{sec^2frac{x}{2}}{1-tan^2frac{x}{2}}=sec x.$$You know that sign difference when you compare $1=cos^2tfrac{x}{2}+sin^2tfrac{x}{2}$ to $1=cosh^2tfrac{x}{2}-sinh^2tfrac{x}{2}$, or $cos x=cos^2tfrac{x}{2}-sin^2tfrac{x}{2}$ to $cosh x=cosh^2tfrac{x}{2}+sinh^2tfrac{x}{2}$? It perfectly balances the sign difference when comparing $y=tan ximplies y'=1+y^2$ to $y=tanh ximplies y'=1-y^2$, i.e. comparing $tfrac{operatorname{d}}{operatorname{d}x}tan x=tfrac{1}{1+x^2}$ to $tfrac{operatorname{d}}{operatorname{d}x}tanh x=tfrac{1}{1-x^2}$. In fact, it causes that result about derivatives.
answered Nov 20 '18 at 20:15
J.G.
23k22137
23k22137
It's a kind of dense vast web of symmetries; and there's probably prettymuch unlimited scope for spinning new threads. I sometimes wonder whether some kind of algebraic structure could be forged out of it - something a bit like a group maybe. I'll leave actually doing that to the serious heavyweights, though ... if they think there's any mileage in it atall! And the gudermannian function ... it is lovable! ... and people are looking at a depiction of it all over the place & every day in the Mercator projection. Another one I recently discovered is sl - sn for k=i.
– AmbretteOrrisey
Nov 20 '18 at 20:28
@AmbrettrOrrisey The inverse is called the Lambertian in the context of Mercator projection. Proving its conjectured value was a major unsolved problem in the mid-16th century. Of course, is they'd known the FTC, then it would have been trivial. I've always been disappointed no-one seems to have access to the very first proof of the integral from 1668.
– J.G.
Nov 20 '18 at 20:46
Sometimes in these archaic proofs they effectively do use integral calculus without actually broaching it explicitly as a particular reasoning process in it's own right. This is so in the case of the proof by ... was it Archimedes? o gf
– AmbretteOrrisey
Nov 20 '18 at 20:54
of the formula for the volume of a a sphere, with the 4/3 factor entering in. ¶ I didn't know the inverse of the Gudermannian was canonised & named after Lambert (who of course was a cartographer, there being a few projections named after him ... assuming it's the same Lambert, which is pretty safe, I think!). Yep thanks for that instruction. For sure, it was hard work in those days, without the differential & integral calculus!
– AmbretteOrrisey
Nov 20 '18 at 20:59
add a comment |
It's a kind of dense vast web of symmetries; and there's probably prettymuch unlimited scope for spinning new threads. I sometimes wonder whether some kind of algebraic structure could be forged out of it - something a bit like a group maybe. I'll leave actually doing that to the serious heavyweights, though ... if they think there's any mileage in it atall! And the gudermannian function ... it is lovable! ... and people are looking at a depiction of it all over the place & every day in the Mercator projection. Another one I recently discovered is sl - sn for k=i.
– AmbretteOrrisey
Nov 20 '18 at 20:28
@AmbrettrOrrisey The inverse is called the Lambertian in the context of Mercator projection. Proving its conjectured value was a major unsolved problem in the mid-16th century. Of course, is they'd known the FTC, then it would have been trivial. I've always been disappointed no-one seems to have access to the very first proof of the integral from 1668.
– J.G.
Nov 20 '18 at 20:46
Sometimes in these archaic proofs they effectively do use integral calculus without actually broaching it explicitly as a particular reasoning process in it's own right. This is so in the case of the proof by ... was it Archimedes? o gf
– AmbretteOrrisey
Nov 20 '18 at 20:54
of the formula for the volume of a a sphere, with the 4/3 factor entering in. ¶ I didn't know the inverse of the Gudermannian was canonised & named after Lambert (who of course was a cartographer, there being a few projections named after him ... assuming it's the same Lambert, which is pretty safe, I think!). Yep thanks for that instruction. For sure, it was hard work in those days, without the differential & integral calculus!
– AmbretteOrrisey
Nov 20 '18 at 20:59
It's a kind of dense vast web of symmetries; and there's probably prettymuch unlimited scope for spinning new threads. I sometimes wonder whether some kind of algebraic structure could be forged out of it - something a bit like a group maybe. I'll leave actually doing that to the serious heavyweights, though ... if they think there's any mileage in it atall! And the gudermannian function ... it is lovable! ... and people are looking at a depiction of it all over the place & every day in the Mercator projection. Another one I recently discovered is sl - sn for k=i.
– AmbretteOrrisey
Nov 20 '18 at 20:28
It's a kind of dense vast web of symmetries; and there's probably prettymuch unlimited scope for spinning new threads. I sometimes wonder whether some kind of algebraic structure could be forged out of it - something a bit like a group maybe. I'll leave actually doing that to the serious heavyweights, though ... if they think there's any mileage in it atall! And the gudermannian function ... it is lovable! ... and people are looking at a depiction of it all over the place & every day in the Mercator projection. Another one I recently discovered is sl - sn for k=i.
– AmbretteOrrisey
Nov 20 '18 at 20:28
@AmbrettrOrrisey The inverse is called the Lambertian in the context of Mercator projection. Proving its conjectured value was a major unsolved problem in the mid-16th century. Of course, is they'd known the FTC, then it would have been trivial. I've always been disappointed no-one seems to have access to the very first proof of the integral from 1668.
– J.G.
Nov 20 '18 at 20:46
@AmbrettrOrrisey The inverse is called the Lambertian in the context of Mercator projection. Proving its conjectured value was a major unsolved problem in the mid-16th century. Of course, is they'd known the FTC, then it would have been trivial. I've always been disappointed no-one seems to have access to the very first proof of the integral from 1668.
– J.G.
Nov 20 '18 at 20:46
Sometimes in these archaic proofs they effectively do use integral calculus without actually broaching it explicitly as a particular reasoning process in it's own right. This is so in the case of the proof by ... was it Archimedes? o gf
– AmbretteOrrisey
Nov 20 '18 at 20:54
Sometimes in these archaic proofs they effectively do use integral calculus without actually broaching it explicitly as a particular reasoning process in it's own right. This is so in the case of the proof by ... was it Archimedes? o gf
– AmbretteOrrisey
Nov 20 '18 at 20:54
of the formula for the volume of a a sphere, with the 4/3 factor entering in. ¶ I didn't know the inverse of the Gudermannian was canonised & named after Lambert (who of course was a cartographer, there being a few projections named after him ... assuming it's the same Lambert, which is pretty safe, I think!). Yep thanks for that instruction. For sure, it was hard work in those days, without the differential & integral calculus!
– AmbretteOrrisey
Nov 20 '18 at 20:59
of the formula for the volume of a a sphere, with the 4/3 factor entering in. ¶ I didn't know the inverse of the Gudermannian was canonised & named after Lambert (who of course was a cartographer, there being a few projections named after him ... assuming it's the same Lambert, which is pretty safe, I think!). Yep thanks for that instruction. For sure, it was hard work in those days, without the differential & integral calculus!
– AmbretteOrrisey
Nov 20 '18 at 20:59
add a comment |
I've managed to arrive at some degree of generalisation of this - infact through that observation that $operatorname{asech}$ & $operatorname{asec}$ are both integrals of 'complementary' functions - complementary in the sense of having occurences of $1-x$ replaced with $x-1$ as the point $x=1$ is traversed. An observation that has arisen through this little exercise is that say we have a function $operatorname{f}$, and we integrate it to get the primitive of $operatorname{f}$, say $operatorname{g}$, and then we take the inverse of that, say $operatorname{ag}$, and integrate that, it is actually equivalent to integrating $operatorname{f}$ multiplied by the ordinate. This is easy to see on a graph: the integral just described is the area enclosed under the curve cut-off by the horizontal line joining (0, $operatorname{f}(x)$) to ($x$, $operatorname{f}(x)$), which is
$$\int x\,d(g(x)) = \int x f(x)\,dx .$$
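To spell that step out (a minimal derivation, assuming $\operatorname{g}$ is increasing and vanishes at the lower limit of integration, written here as $0$): substituting $y=\operatorname{g}(\upsilon)$ gives
$$\int_0^{\operatorname{g}(x)}\operatorname{ag}(y)\,dy=\int_0^{x}\upsilon\,\operatorname{g}'(\upsilon)\,d\upsilon=\int_0^{x}\upsilon\,\operatorname{f}(\upsilon)\,d\upsilon .$$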
In this particular case, and considering $0\leq x\leq 1$, we have
$$\int_x^1\frac{d\upsilon}{\upsilon\sqrt{1-\upsilon^2}}=\operatorname{asech}(x), $$
and
$$\int_x^1\frac{d\upsilon}{\sqrt{1-\upsilon^2}}=\operatorname{acos}(x), $$
and (using $y$, to emphasise that we are switching to viewing the integration along the $y$ axis)
$$\int_0^y\operatorname{sech}\upsilon\,d\upsilon=\operatorname{acos}x=\operatorname{acos}(\operatorname{sech}y) ,$$
which is indeed one of the formulæ arrived at in the main body of the question.
Likewise for $x\geq 1$ we have
$$\int_1^x\frac{d\upsilon}{\upsilon\sqrt{\upsilon^2-1}}=\operatorname{asec}(x), $$
and
$$\int_1^x\frac{d\upsilon}{\sqrt{\upsilon^2-1}}=\operatorname{acosh}(x), $$
and (using $y$, to emphasise ... again)
$$\int_0^y\operatorname{sec}\upsilon\,d\upsilon=\operatorname{acosh}x=\operatorname{acosh}(\operatorname{sec}y) ,$$
which is the corresponding formula arrived at in the main body.
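A quick numerical sanity check of the two relations $\int_0^y\operatorname{sech}\upsilon\,d\upsilon=\operatorname{acos}(\operatorname{sech}y)$ & $\int_0^y\operatorname{sec}\upsilon\,d\upsilon=\operatorname{acosh}(\operatorname{sec}y)$ (just a sketch, assuming numpy & scipy are to hand; it does nothing but evaluate both sides):
```python
import numpy as np
from scipy.integrate import quad

# check  ∫_0^y sech(v) dv = acos(sech y)   and   ∫_0^y sec(v) dv = acosh(sec y)
for y in (0.3, 0.9, 1.4):                  # keep y < pi/2 so sec stays finite
    lhs_h, _ = quad(lambda v: 1.0/np.cosh(v), 0.0, y)
    lhs_c, _ = quad(lambda v: 1.0/np.cos(v),  0.0, y)
    print(f"y={y}:  {lhs_h:.9f} vs {np.arccos(1.0/np.cosh(y)):.9f},"
          f"  {lhs_c:.9f} vs {np.arccosh(1.0/np.cos(y)):.9f}")
```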
With a little consideration it can be seen that the two integrals in $y$ being inverses of each other hinges on the functions $\operatorname{acosh}$ & $\operatorname{asech}$ being a pair of functions $\phi$ & $\psi$ such that
$$\phi(x)\equiv\psi(1/x) ;$$
and likewise for $\operatorname{acos}$ & $\operatorname{asec}$.
Translating this into terms of the origin of these functions as integrals of
$$\operatorname{f_l}(x)\equiv\frac{1}{x\sqrt{1-x^2}}$$
and
$$\operatorname{f_r}(x)\equiv\frac{1}{x\sqrt{x^2-1}}$$
($l$ = left, $r$ = right), what it hinges on is that, under the substitution $x\mapsto 1/x$ (with $d(1/x)=-dx/x^2$),
$$\operatorname{f_a}(1/x)\,d(1/x)\equiv -\,x\operatorname{f_b}(x)\,dx$$
and
$$\tfrac{1}{x}\operatorname{f_a}(1/x)\,d(1/x)\equiv -\operatorname{f_b}(x)\,dx ,$$
where $a$ is $l$ or $r$ and $b$ is correspondingly $r$ or $l$. This is very fiddly to put into symbols: but basically what it hinges on is that if $1/x$ be substituted for $x$ in $\operatorname{f_l}\,dx$ then $x\operatorname{f_r}\,dx$ should be obtained (up to sign); if $1/x$ be substituted for $x$ in $x\operatorname{f_l}\,dx$ then $\operatorname{f_r}\,dx$ should be obtained; and likewise with $l$ & $r$ interchanged: basically just thorough reciprocity under that transformation.
It can also be seen that this happens if the function $\operatorname{f}$ is of the form: reciprocal of ($x\,\times$ a factor that changes $1-x$ into $x-1$, or $x-1$ into $1-x$, when $1/x$ is substituted for $x$ and the result is multiplied by $x$ - the extra $x$ coming from $d(1/x)=-dx/x^2$). As I said, this is terribly fiddly to put into words & symbols; but the underlying idea is really quite an elementary one.
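Here is a tiny numerical spot-check of that reciprocity (a sketch only; the names `f_l` & `f_r` just mirror the notation above):
```python
import math

def f_l(u): return 1.0 / (u * math.sqrt(1.0 - u*u))   # natural domain 0 < u < 1
def f_r(u): return 1.0 / (u * math.sqrt(u*u - 1.0))   # natural domain u > 1

for x in (1.3, 2.0, 5.0):
    # substituting x -> 1/x in f_l(x) dx gives f_l(1/x) d(1/x) = -f_l(1/x)/x^2 dx,
    # which should coincide with -x f_r(x) dx ...
    print(abs(-f_l(1/x)/x**2 + x*f_r(x)) < 1e-12,
          # ... and doing the same to x f_l(x) dx should give -f_r(x) dx
          abs(-(1/x)*f_l(1/x)/x**2 + f_r(x)) < 1e-12)
```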
So we could construct pairs of functions of the following form:
$$\frac{1}{x\left((1-x)^k(1+x)^{n-k}\right)^{1/n}} $$
&
$$\frac{1}{x\left((x-1)^k(1+x)^{n-k}\right)^{1/n}} ,$$
where $n$ is a natural number, and $k$ is one of the totatives of $n$ (the set counted by Euler's totient function); and the functions obtained by taking the primitives of these ought to have the reciprocal property we have been discussing - that the integrals of their inverses are mutual inverses.
It actually works beautifully for the case $n=1$, even though that results in integrals that have infinities; and it's clear that it works for $n=2$, as that is the case that prompted this post in the first place. As for higher values of $n$ ... anything $>2$ results in thoroughly diabolical hypergeometric functions that have who-knows-what inverses, and I shall have to feed them into some kind of mathematics package to test them for this property. Testing for mutual inverses is actually not so bad, though (a bit like it's not so bad testing whether two numbers are coprime): you don't have to actually compute the inverses explicitly - you can just feed one into the other as an argument & see whether the identity function drops out; a rough numerical sketch of that test is given below.
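A rough sketch of that numerical test, done here for the familiar $n=2$ pair just to exhibit the machinery (scipy's `quad` & `brentq` assumed available; for $n>2$ one would swap in the generalised integrands above, minding the domains and the behaviour near $x=1$):
```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# the two 'complementary' integrands for the n = 2 case (the asech / asec pair)
f_l = lambda u: 1.0 / (u * np.sqrt(1.0 - u*u))   # 0 < u < 1
f_r = lambda u: 1.0 / (u * np.sqrt(u*u - 1.0))   # u > 1

# primitives taken from the common point u = 1, where both vanish
g_l = lambda x: quad(f_l, x, 1.0)[0]             # = asech(x), decreasing on (0,1)
g_r = lambda x: quad(f_r, 1.0, x)[0]             # = asec(x),  increasing on (1,oo)

# numerical inverses by root-finding -- no closed forms assumed
ag_l = lambda y: brentq(lambda x: g_l(x) - y, 1e-9, 1.0 - 1e-9)
ag_r = lambda y: brentq(lambda x: g_r(x) - y, 1.0 + 1e-9, 1.0e3)

# the integrals of those inverses -- conjectured to be mutual inverses
F = lambda y: quad(ag_l, 0.0, y)[0]
G = lambda y: quad(ag_r, 0.0, y)[0]

for y in (0.2, 0.7, 1.2):
    print(y, F(G(y)))    # F(G(y)) should reproduce y (up to quadrature error)
```
For a general pair the only things that change are the definitions of `f_l` & `f_r` and the brackets handed to `brentq`.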
For $n=1$, the pair is
$$-\ln(e^{-y}+1)$$
&
$$-\ln(e^{-y}-1) .$$
The former is from
$$\frac{-1}{x(1-x)}$$
... integral ...
$$\ln\frac{1-x}{x}$$
... inverse ...
$$\frac{1}{1+e^y}$$
... integral ...
$$-\ln(e^{-y}+1) ;$$
& the latter from
$$\frac{1}{x(x-1)}$$
... integral ...
$$\ln\frac{x-1}{x}$$
... inverse ...
$$\frac{1}{1-e^y}$$
... integral ...
$$-\ln(e^{-y}-1) .$$
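That these two really are mutual inverses can be verified directly: setting $x=-\ln(e^{-y}+1)$ and solving for $y$,
$$e^{-x}=e^{-y}+1 \;\Longrightarrow\; e^{-y}=e^{-x}-1 \;\Longrightarrow\; y=-\ln(e^{-x}-1) ,$$
which is precisely the other member of the pair (note $x<0$ throughout, so $e^{-x}-1>0$).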
This isn't as much of a generalisation as I was hoping for; but it does at least exhibit the original pair of functions being queried as being part of some kind of pattern, and not just an isolated special case that just happens to have that property because it does!
The functions gotten in the $n=1$ régime are worthy of some comment, I think, on various grounds. For one thing, it will be observed that in this case $\operatorname{f_l}$ & $\operatorname{f_r}$ are not actually different at all! However ... the integrals are different functions. We also get infinities in this case, but fortunately that doesn't foil the reasoning. If we look at the whole matter graphically, we see that in our familiar $n=2$ case the two integrals $\operatorname{g_l}$ & $\operatorname{g_r}$ spring from the point $(1,0)$ square-root-wise; whereas the functions under $n=1$ proceed asymptotically from $(1,-\infty)$; and for $n>1$ they proceed from the point $(1,0)$ with behaviour tending towards linearity as $n$ increases ($\propto(x-1)^{1-1/n}$, to be more precise), so the vertical and then curving-down section gets progressively tighter with increasing $n$. This brings me to the next point: the functions for $n=1$ positively abound in negativity; normally I would want to translate the whole affair into a region better populated by positive numbers ... but I think in this case they are better left as they are, as they better exhibit the continuity & progression of behaviour just described. The same applies to the functions that constitute the final result: like any bona-fide, self-respecting mutual inverses they ought to be mutual reflections in the line $x=y$; in the $n=2$ case they spring from the origin & 'splay out' to become asymptotic, one to the line $x=\pi/2$, the other to the line $y=\pi/2$; whereas in the $n=1$ case they approach asymptotically from $(-\infty,-\infty)$ along the negative branch of the identity-function graph, and splay out to become asymptotic, one to the $x$-axis, the other to the $y$-axis, on the negative sides of them. This might mean that most of the affair is in terms of negative numbers ... but it shows all the better that continuity of evolution of behaviour with increasing $n$.
Images for Case $n=2$ (the original subject of the post)
The upper diagram shows the functions $y=\operatorname{asech}x$ & $y=\operatorname{asec}x$, and the lower the integrals of these along the $y$-axis: $x=\operatorname{acos}\operatorname{sech}y$ & $x=\operatorname{acosh}\operatorname{sec}y$.
Images for Case $n=1$
The upper diagram shows the functions $y=\ln((1-x)/x)$ & $y=\ln((x-1)/x)$, and the lower the integrals of these along the $y$-axis: $x=-\ln(e^{-y}+1)$ & $x=-\ln(e^{-y}-1)$.
edited Nov 21 '18 at 5:53
community wiki
13 revs
AmbretteOrrisey
Very interesting and would be intriguing to look at other pairs too.
– Richard Martin
Nov 16 '18 at 10:54
You see what I mean then - like in other branches of mathematics - linear algebra being fertile ground for this kind of thing - you prove a theorem not by unpacking the content of whatever items it might be that you wish to relate, but by examining the patterns & structures that such items form when taken together as an ensemble. To me that theorem that I have cited here looks for all the world like something that ought to be showable to proceed in its own right from the very recipe itself for circular & hyperbolic functions.
– AmbretteOrrisey
Nov 16 '18 at 11:28
I think, more to the point, the reason this holds is that $$\operatorname{sech}{u} + i\operatorname{tanh}{u} = \exp{\left[i\arctan{\left(\sinh{u}\right)}\right]}$$
– Ron Gordon
Nov 16 '18 at 14:57
@Ron Gordon -- I'm getting the hang of this site! I first thought "why on earth did you use LaTeX notation in a field in which it isn't interpreted!!?" ... then I had the brilliant, brilliant idea, which I'm sure no-one has ever thought of before, of copying the text, pasting it into an answer field, and summoning a preview!! Thanks for that contribution. I'll examine it ... but I was actually brewing some thoughts of my own that I was just about to send in. I'll put it as an edit to the original question ... I get that prerogative, it being my question!
– AmbretteOrrisey
Nov 16 '18 at 15:49
@AmbretteOrrisey: yeah, that's what I do and did here.
– Ron Gordon
Nov 16 '18 at 15:50