If $f(x)=\int_{x-1}^x f(s)\,ds$, is $f$ constant? Periodic?














I was thinking of periodic functions, and in particular the following type of condition:




If a function $f:\mathbb{R}\to\mathbb{R}$ always "tends to its average", then it should be periodic.




To make things more formal, by "tending to the average" we could mean something like $f(x)=\int_{x-1}^x f(s)\,ds$. This is only the average over the previous time interval of length $1$, but it seems an interesting enough property. However, the only functions I could find that satisfy this property are the constant ones!




Question: If $f:\mathbb{R}\to\mathbb{R}$ is continuous (or more generally measurable) and $f(x)=\int_{x-1}^x f(s)\,ds$ for (almost) every $x\in\mathbb{R}$, then is $f$ constant (a.e.)? Periodic (a.e.)?




Here is a first try for $C^1$ functions (see edit below!): If $f$ is $C^1$ and $x$ is fixed, we can use the Taylor expansion $f(s)=f(x)+O(s-x)$ (and similarly at $x-1$) to obtain
\begin{align*}
f(x+t)-f(x)&=\int_x^{x+t}f(s)\,ds-\int_{x-1}^{x-1+t}f(s)\,ds\\
&=\int_x^{x+t}\bigl(f(x)+O(s-x)\bigr)\,ds-\int_{x-1}^{x-1+t}\bigl(f(x-1)+O(s-x+1)\bigr)\,ds\\
&=t\bigl(f(x)-f(x-1)\bigr)+O(t^2),
\end{align*}

so $f'(x)=f(x)-f(x-1)$. This is obviously true if $f$ is constant, but the converse is not clear to me at the moment.
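As a numerical sanity check of this differentiation step (a sketch, not part of the argument; the test function $\sin$ is an arbitrary smooth stand-in), the fundamental theorem of calculus gives $\frac{d}{dx}\int_{x-1}^x f(s)\,ds = f(x)-f(x-1)$ for any continuous $f$:

```python
import math

# FTC check: for smooth f, the derivative of F(x) = \int_{x-1}^x f(s) ds
# equals f(x) - f(x-1).  Here f = sin is an arbitrary smooth stand-in.
f = math.sin

def F(x, n=4000):
    # composite trapezoid rule for \int_{x-1}^x f(s) ds
    h = 1.0 / n
    total = 0.5 * (f(x - 1) + f(x))
    for k in range(1, n):
        total += f(x - 1 + k * h)
    return total * h

x, eps = 0.9, 1e-5
dF = (F(x + eps) - F(x - eps)) / (2 * eps)   # central difference
assert abs(dF - (f(x) - f(x - 1))) < 1e-6
```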





Edit: From a comment and answer below, the equation $f'(x)=f(x)-f(x-1)$ has non-periodic solutions on $\mathbb{R}\setminus\mathbb{Z}$, so this should not be the way to go for $C^1$ functions. However, even in this case it is not clear that any solution of this equation will satisfy $f(x)=\int_{x-1}^x f(s)\,ds$, which is the question: All I can obtain, in principle, is $f(x)-f(x-1)=\int_{x-1}^x f(s)\,ds-\int_{x-2}^{x-1}f(s)\,ds$.










Comment (score 1) – Dylan (Jan 7 at 9:06): This is a delay differential equation. Not all solutions are constant or periodic.


















Tags: calculus, integration, recreational-mathematics, average






edited Jan 7 at 9:15







asked Jan 7 at 8:36 by Questioner








4 Answers






Answer (score 6, bounty +50):

Here is a typical argument using the characteristic equation:



Let $\alpha \in \mathbb{C}\setminus\{0\}$ solve the equation $\alpha = 1-e^{-\alpha}$. One can indeed prove that such a solution exists; one is numerically given by $-2.08884 + 7.46149 i$. Then



$$ \int_{x-1}^{x} e^{\alpha t} \, \mathrm{d}t = \frac{1 - e^{-\alpha}}{\alpha}\, e^{\alpha x} = e^{\alpha x}, $$



hence $f(x) = e^{\alpha x}$ is one (complex-valued) solution of the equation



$$ f(x) = \int_{x-1}^{x} f(t) \, \mathrm{d}t. \tag{*}$$



If one is interested in real-valued solutions only, then one can consider the real part and the imaginary part of $e^{\alpha x}$. In particular, this shows that there exists an analytic solution of $\text{(*)}$ which is neither constant nor periodic.
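The quoted root is easy to verify numerically. A minimal sketch (polishing the printed value with Newton's method, then testing the identity $\int_{x-1}^x e^{\alpha t}\,\mathrm{d}t = e^{\alpha x}$ through the closed-form antiderivative; the starting value and test point are arbitrary):

```python
import cmath

def residual(a):
    # residual of the characteristic equation  alpha = 1 - e^{-alpha}
    return a - (1 - cmath.exp(-a))

# polish the quoted root with Newton's method; d/da of the residual is 1 - e^{-a}
alpha = complex(-2.08884, 7.46149)
for _ in range(20):
    alpha -= residual(alpha) / (1 - cmath.exp(-alpha))
assert abs(residual(alpha)) < 1e-12

# antiderivative check of  \int_{x-1}^x e^{alpha t} dt = e^{alpha x}
x = 0.7  # arbitrary test point
integral = (cmath.exp(alpha * x) - cmath.exp(alpha * (x - 1))) / alpha
assert abs(integral - cmath.exp(alpha * x)) < 1e-9
```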





Addendum. We prove the following claim:




Claim. There exists a non-zero solution of $\alpha = 1 - e^{-\alpha}$ in $\mathbb{C}$.




Proof. We first note that $\varphi(x) = x(1-\log x)$ satisfies $\varphi(0^+) = 0$ and $\varphi(1) = 1$. Next, let $k$ be a positive integer. Then




  1. There exists $y \in (2k\pi, (2k+\frac{1}{2})\pi)$ such that $\varphi(\sin(y)/y) = \cos(y)$, by the intermediate value theorem.


  2. Set $x = \log(\sin(y)/y)$.



We claim that $\alpha = x + iy$ solves the equation. Indeed, it is clear that $e^{-x}\sin y = y$ holds. Moreover,



$$ (1-x)e^x = \varphi(\sin(y)/y) = \cos(y), $$



and so $1 - x = e^{-x}\cos(y)$. Combining these,



$$ 1 - \alpha = 1 - x - iy = e^{-x}\cos(y) - ie^{-x}\sin(y) = e^{-x-iy} = e^{-\alpha}. $$



Therefore the claim follows. ////



(A careful inspection shows that this construction produces all the solutions of $\alpha = 1 - e^{-\alpha}$ in the upper half-plane.)
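The construction in the proof can be carried out numerically. A sketch (bisection on $g(y)=\varphi(\sin(y)/y)-\cos(y)$ over $(2k\pi,(2k+\tfrac12)\pi)$ with $k=1$, recovering the root quoted above; the bracket offset and tolerances are arbitrary choices):

```python
import math

def phi(x):
    return x * (1 - math.log(x))

def g(y):
    # a sign change of g on (2k*pi, (2k+1/2)*pi) gives the IVT root
    return phi(math.sin(y) / y) - math.cos(y)

# k = 1: bracket (2*pi, 2.5*pi); g < 0 near the left end, g > 0 at the right end
lo, hi = 2 * math.pi + 1e-9, 2.5 * math.pi
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
y = 0.5 * (lo + hi)
x = math.log(math.sin(y) / y)
# alpha = x + i y matches the numerical root -2.08884 + 7.46149 i quoted above
assert abs(x - (-2.08884)) < 1e-4 and abs(y - 7.46149) < 1e-4
```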






answered Jan 12 at 6:40, edited Jan 19 at 10:23 – Sangchul Lee

Comment – Questioner (Jan 16 at 11:59): I think you want $\varphi(x)=x(1-\log x)$ instead. It satisfies the same properties, and $$(1-x)e^x=(1-\log(\sin(y)/y))\sin(y)/y.$$ Other than that, the real part $f(t)$ of $t\mapsto e^{\alpha t}$ will be $f(t)=e^{xt}\cos(yt)$, a non-periodic solution of the original equation.

Comment – Sangchul Lee (Jan 19 at 10:22): @Questioner, that's a nice catch! I will fix it :)



















Answer (score 2):

This is a partial answer in which I establish some properties of any $f$ that complies with
$$
f(x)=\int_{x-1}^x f(s)\,{\rm d}s
$$

for all $x\in\mathbb{R}$. I will update this answer as I make further progress.




1. $f\in C^{\infty}(\mathbb{R})$ if $f\in L_{\rm loc}^1(\mathbb{R})$ ($f$ must be smooth if it is locally integrable).




Proof. $\forall\,x_0\in\mathbb{R}$, $\forall\,x\ge x_0+1$, we have
$$
f(x)=\int_{x_0}^x f(s)\,{\rm d}s-\int_{x_0}^{x-1}f(s)\,{\rm d}s.
$$

Since $f\in L_{\rm loc}^1(\mathbb{R})$, it follows that
$$
\int_{x_0}^x f(s)\,{\rm d}s\quad\text{and}\quad\int_{x_0}^{x-1}f(s)\,{\rm d}s
$$

are absolutely continuous. Thus $f$ is absolutely continuous on $\left(x_0+1,\infty\right)$. The arbitrariness of $x_0$ implies that $f$ is absolutely continuous on $\mathbb{R}$. Thus necessarily, $f\in C(\mathbb{R})$.



Likewise, since $f\in C(\mathbb{R})$, it follows that
$$
\int_{x_0}^x f(s)\,{\rm d}s\quad\text{and}\quad\int_{x_0}^{x-1}f(s)\,{\rm d}s
$$

are continuously differentiable, which leads to $f\in C^1(\mathbb{R})$.



Repeating the above reasoning inductively, we eventually obtain $f\in C^{\infty}(\mathbb{R})$. $\#$



This conclusion suggests that, at least in the most general case, we need only consider those $f$'s that are smooth on $\mathbb{R}$.




2. $f=0$ if $f\in L^1(\mathbb{R})$ ($f$ must be zero if it is integrable on $\mathbb{R}$).




Proof. Since $f\in L^1(\mathbb{R})$, it is obvious that
$$
\int_{x-1}^x f(s)\,{\rm d}s=\int_{\mathbb{R}}1_{\left[0,1\right]}(x-s)f(s)\,{\rm d}s=\left(1_{\left[0,1\right]}*f\right)(x).
$$

Hence, the original relation is equivalent to the following convolution equation on $\mathbb{R}$:
$$
f=1_{\left[0,1\right]}*f.
$$



Note that $f\in L^1(\mathbb{R})$, so its Fourier transform $\hat{f}$ is well-defined. By the convolution theorem,
$$
\hat{f}=\widehat{1_{\left[0,1\right]}}\,\hat{f}\Longrightarrow\left(1-\widehat{1_{\left[0,1\right]}}\right)\hat{f}=0.
$$

This implies that $\hat{f}(\xi)=0$ for all $\xi\ne 0$. Besides, the continuity of $\hat{f}$ yields $\hat{f}(0)=0$. Consequently, we have
$$
\hat{f}=0\iff f=0.\ \#
$$



This conclusion suggests that any non-trivial solution of the original equation must be non-integrable on $\mathbb{R}$, e.g., a non-zero constant. Nevertheless, given that it is reasonable to assume $f$ to be locally integrable, the original equation can always be formulated in the convolution form $f=1_{\left[0,1\right]}*f$. Just note that $\hat{f}$ is not defined and the convolution theorem no longer applies if $f$ is locally integrable but not integrable on $\mathbb{R}$.
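The key fact used above, that $\widehat{1_{[0,1]}}(\xi)\ne 1$ for $\xi\ne 0$, follows from $\widehat{1_{[0,1]}}(\xi)=e^{-i\pi\xi}\,\frac{\sin(\pi\xi)}{\pi\xi}$, whose modulus is strictly below $1$ away from the origin. A small sketch (using the transform convention $\hat f(\xi)=\int f(s)e^{-2i\pi\xi s}\,{\rm d}s$, an assumption since the answer does not fix one; the sample frequencies are arbitrary):

```python
import cmath, math

def hat_indicator(xi):
    # Fourier transform of 1_[0,1] under hat f(xi) = \int f(s) e^{-2 i pi xi s} ds
    z = 2j * math.pi * xi
    return (1 - cmath.exp(-z)) / z

for xi in (0.25, 0.5, 1.0, 2.7, -4.2):
    val = hat_indicator(xi)
    # modulus equals |sin(pi xi)/(pi xi)| < 1, so (1 - hat) f_hat = 0 forces f_hat = 0
    assert abs(val) < 1 and abs(val - 1) > 1e-3
```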




3. $f\equiv\text{const}$ if $f\in L_{\rm loc}^1(\mathbb{R})\cap C_T(\mathbb{R})$ ($f$ must be constant if it is locally integrable and $T$-periodic).




Proof. Since $f\in L_{\rm loc}^1(\mathbb{R})$, we have $f\in C^{\infty}(\left[0,T\right])$. Thanks to the periodicity, $f$ admits a Fourier series on $\left[0,T\right]$:
$$
f(x)\sim\sum_{n\in\mathbb{Z}}a_n\,e^{\frac{2i\pi nx}{T}}.
$$



Since $f\in C^{\infty}(\left[0,T\right])$, the Fourier series of $f$ converges absolutely and uniformly to $f$. Therefore, on the one hand,
$$
f(x)=\sum_{n\in\mathbb{Z}}a_n\,e^{\frac{2i\pi nx}{T}}.
$$

On the other hand,
\begin{align}
\int_{x-1}^x f(s)\,{\rm d}s&=\int_{x-1}^x\sum_{n\in\mathbb{Z}}a_n\,e^{\frac{2i\pi ns}{T}}\,{\rm d}s\\
&=\sum_{n\in\mathbb{Z}}a_n\int_{x-1}^x e^{\frac{2i\pi ns}{T}}\,{\rm d}s\\
&=\sum_{n\in\mathbb{Z}}a_n\frac{1-e^{-\theta_n}}{\theta_n}e^{\frac{2i\pi nx}{T}},
\end{align}

where $\theta_n=2i\pi n/T$, and $\left(1-e^{-\theta_0}\right)/\theta_0:=1$ (defined so that the form of the series is preserved; otherwise, one may carry out the integration separately for $n=0$ and $n\ne 0$ and find the results identical).



Thanks to this result, the original equation requires
$$
a_n=a_n\frac{1-e^{-\theta_n}}{\theta_n}\iff\left(1-\frac{1-e^{-\theta_n}}{\theta_n}\right)a_n=0.
$$

Note that
$$
1-\frac{1-e^{-z}}{z}=0
$$

has only one solution on $i\mathbb{R}$ (the imaginary axis), namely $z=0$. Since $\theta_n\ne 0$ for all $n\ne 0$, we must have $a_n=0$ for all $n\ne 0$. This leads to $f(x)=a_0$, i.e., $f$ is constant on $\mathbb{R}$. $\#$



This conclusion suggests that any continuous periodic function satisfying the original equation must be constant.



[TBC...]



[Following @Empy2's answer, I believe there exist non-periodic solutions of the original equation. Yet as per the above properties, any such solution has to be smooth and is most likely unbounded. Trying a power series $f(x)=\sum_{n=0}^{\infty}a_nx^n$ could be promising, but it leads to an infinite-dimensional linear system, and its convergence also remains unknown, which challenges the commutativity of summation and integration...]






Answer (score 1):

Define $f(x)=3x^2-4x+1$ for $x\in(0,1)$, so that the differential equation holds at $x=1$.

For $x\in(1,2)$, solve the differential equation
$$\frac{df}{dx}=f(x)-\bigl(3(x-1)^2-4(x-1)+1\bigr).$$
Iterate the procedure for $x\in(2,3)$, and so on.






Comment – Questioner (Jan 7 at 9:09): That's nice. However, how do we define $f(x)$ for $x<0$? Also, I have not proved yet that the equation $f'(x)=f(x)-f(x-1)$ implies that $f$ satisfies the property described above.

Comment – Questioner (Jan 7 at 9:13): Never mind the first part of the comment above: for $x\in[-1,0]$ we have $f'(x+1)=f(x+1)-f(x)$, so this defines $f$ on $[-1,1]$, and we iterate...

Comment – Saad (Jan 12 at 2:09): @Questioner Note that $$f(1)=\int_0^1 f(t)\,\mathrm dt=0,$$ thus integrating $f'(t)=f(t)-f(t-1)$ for $t\in[1,x]$ yields $$f(x)=\int_1^x f(t)\,\mathrm dt-\int_0^{x-1}f(t)\,\mathrm dt=\int_{x-1}^x f(t)\,\mathrm dt$$ for $x\geqslant 1$.
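This construction is easy to test numerically. The sketch below marches the delay ODE $f'(x)=f(x)-f(x-1)$ forward with Heun's method from the seed segment $3x^2-4x+1$ on $[0,1]$ (the method of steps), then checks the integral identity $f(2)=\int_1^2 f(s)\,ds$ noted in Saad's comment; the grid size and tolerance are arbitrary choices:

```python
import numpy as np

h = 1e-3
N = int(round(1 / h))                 # grid points per unit interval
x = np.arange(0.0, 3.0 + h / 2, h)
f = np.empty_like(x)
f[:N + 1] = 3 * x[:N + 1]**2 - 4 * x[:N + 1] + 1   # seed segment on [0, 1]

# method of steps with Heun's method; the delayed term f(x-1) is N points back
for i in range(N, len(x) - 1):
    k1 = f[i] - f[i - N]
    k2 = (f[i] + h * k1) - f[i + 1 - N]
    f[i + 1] = f[i] + h * (k1 + k2) / 2

# verify f(2) = \int_1^2 f(s) ds with the trapezoid rule on the stored grid
integral = h * (f[N:2 * N + 1].sum() - 0.5 * (f[N] + f[2 * N]))
assert abs(f[2 * N] - integral) < 1e-3
```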





















Answer (score -1):

Let
$$f(x) = \dfrac{a_0}2+\sum_{n=1}^\infty\left(a_n\cos 2\pi nx + b_n\sin 2\pi nx\right);$$
then
$$\int_{x-1}^x f(s)\,\mathrm ds = \dfrac{a_0}2,$$
so $f(x)$ is a constant.







      Your Answer





      StackExchange.ifUsing("editor", function () {
      return StackExchange.using("mathjaxEditing", function () {
      StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
      StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
      });
      });
      }, "mathjax-editing");

      StackExchange.ready(function() {
      var channelOptions = {
      tags: "".split(" "),
      id: "69"
      };
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function() {
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled) {
      StackExchange.using("snippets", function() {
      createEditor();
      });
      }
      else {
      createEditor();
      }
      });

      function createEditor() {
      StackExchange.prepareEditor({
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: true,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: 10,
      bindNavPrevention: true,
      postfix: "",
      imageUploader: {
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      },
      noCode: true, onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      });


      }
      });














      draft saved

      draft discarded


















      StackExchange.ready(
      function () {
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3064768%2fif-fx-int-x-1x-fsds-is-f-constant-periodic%23new-answer', 'question_page');
      }
      );

      Post as a guest















      Required, but never shown

























      4 Answers
      4






      active

      oldest

      votes








      4 Answers
      4






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      6





      +50







      $begingroup$

      Here is a typical argument using characteristic equation:



      Let $alpha in mathbb{C}setminus{0}$ solve the equation $alpha = 1-e^{-alpha}$. One can indeed prove that such solution exists. One such solution is numerically given by $-2.08884 + 7.46149 i$. Then



      $$ int_{x-1}^{x} e^{alpha t} , mathrm{d}t = frac{1 - e^{-alpha}}{alpha} e^{alpha x} = e^{alpha x}, $$



      hence $f(x) = e^{alpha x}$ is one (complex-valued) solution of the equation



      $$ f(x) = int_{x-1}^{x} f(t) , mathrm{d}t tag{*}$$



      If one is interested in real-valued solutions only, then one can consider both the real part and the imaginary part of $e^{alpha x}$. In particular, this tells that there exists an analytic solution of $text{(*)}$ which is neither constant nor having real-period.





      Addendum. We prove the following claim:




      Claim. There exists a non-zero solution of $alpha = 1 - e^{-alpha}$ in $mathbb{C}$.




      Proof. We first note that $varphi(x) = x(1-log x)$ satisfies $varphi(0^+) = 0$ and $varphi(1) = 1$. Next, let $k$ be a positive integer. Then




      1. There exists $y in (2kpi, (2k+frac{1}{2})pi)$ such that $ varphi(sin(y)/y) = cos (y) $, by the intermediate-value theorem.


      2. Set $x = log(sin(y)/y)$.



      We claim that $ alpha = x + iy $ solves the equation. Indeed, it is clear that $ e^{-x}sin y = y $ holds. Moreover,



      $$ (1-x)e^x = varphi(sin(y)/y) = cos(y), $$



      and so, $1 - x = e^{-x}cos(y)$. Combining altogether,



      $$ 1 - alpha = 1 - x - iy = e^{-x}cos(y) - ie^{-x}sin(y) = e^{-x-iy} = e^{-alpha}. $$



      Therefore the claim follows. ////



      (A careful inespection shows that this construction produces all the solutions of $alpha = 1 - e^{-alpha}$ in the upper half-plane.)






      share|cite|improve this answer











      $endgroup$













      • $begingroup$
        I think you want $varphi(x)=x(1-log x)$ instead. It satisfies the same properties, and $$(1-x)e^x=(1-log(sin(y)/y))sin(y)/y.$$ Other than that, the real part $f(t)$ of $tmapsto e^{alpha t}$ will be $f(t)=e^{xt}cos(yt)$, a non-periodic solution of the original equation.
        $endgroup$
        – Questioner
        Jan 16 at 11:59












      • $begingroup$
        @Questioner, That's a nice catch! I will fix it :)
        $endgroup$
        – Sangchul Lee
        Jan 19 at 10:22
















      6





      +50







      $begingroup$

      Here is a typical argument using characteristic equation:



      Let $alpha in mathbb{C}setminus{0}$ solve the equation $alpha = 1-e^{-alpha}$. One can indeed prove that such solution exists. One such solution is numerically given by $-2.08884 + 7.46149 i$. Then



      $$ int_{x-1}^{x} e^{alpha t} , mathrm{d}t = frac{1 - e^{-alpha}}{alpha} e^{alpha x} = e^{alpha x}, $$



      hence $f(x) = e^{alpha x}$ is one (complex-valued) solution of the equation



      $$ f(x) = int_{x-1}^{x} f(t) , mathrm{d}t tag{*}$$



      If one is interested in real-valued solutions only, then one can consider both the real part and the imaginary part of $e^{alpha x}$. In particular, this tells that there exists an analytic solution of $text{(*)}$ which is neither constant nor having real-period.





      Addendum. We prove the following claim:




      Claim. There exists a non-zero solution of $alpha = 1 - e^{-alpha}$ in $mathbb{C}$.




      Proof. We first note that $varphi(x) = x(1-log x)$ satisfies $varphi(0^+) = 0$ and $varphi(1) = 1$. Next, let $k$ be a positive integer. Then




      1. There exists $y in (2kpi, (2k+frac{1}{2})pi)$ such that $ varphi(sin(y)/y) = cos (y) $, by the intermediate-value theorem.


      2. Set $x = log(sin(y)/y)$.



      We claim that $ alpha = x + iy $ solves the equation. Indeed, it is clear that $ e^{-x}sin y = y $ holds. Moreover,



      $$ (1-x)e^x = varphi(sin(y)/y) = cos(y), $$



      and so, $1 - x = e^{-x}cos(y)$. Combining altogether,



      $$ 1 - alpha = 1 - x - iy = e^{-x}cos(y) - ie^{-x}sin(y) = e^{-x-iy} = e^{-alpha}. $$



      Therefore the claim follows. ////



      (A careful inespection shows that this construction produces all the solutions of $alpha = 1 - e^{-alpha}$ in the upper half-plane.)






      share|cite|improve this answer











      $endgroup$













      • $begingroup$
        I think you want $varphi(x)=x(1-log x)$ instead. It satisfies the same properties, and $$(1-x)e^x=(1-log(sin(y)/y))sin(y)/y.$$ Other than that, the real part $f(t)$ of $tmapsto e^{alpha t}$ will be $f(t)=e^{xt}cos(yt)$, a non-periodic solution of the original equation.
        $endgroup$
        – Questioner
        Jan 16 at 11:59












      • $begingroup$
        @Questioner, That's a nice catch! I will fix it :)
        $endgroup$
        – Sangchul Lee
        Jan 19 at 10:22














      6





      +50







      6





      +50



      6




      +50



      $begingroup$

      Here is a typical argument using characteristic equation:



      Let $alpha in mathbb{C}setminus{0}$ solve the equation $alpha = 1-e^{-alpha}$. One can indeed prove that such solution exists. One such solution is numerically given by $-2.08884 + 7.46149 i$. Then



      $$ int_{x-1}^{x} e^{alpha t} , mathrm{d}t = frac{1 - e^{-alpha}}{alpha} e^{alpha x} = e^{alpha x}, $$



      hence $f(x) = e^{alpha x}$ is one (complex-valued) solution of the equation



      $$ f(x) = int_{x-1}^{x} f(t) , mathrm{d}t tag{*}$$



      If one is interested in real-valued solutions only, then one can consider both the real part and the imaginary part of $e^{alpha x}$. In particular, this tells that there exists an analytic solution of $text{(*)}$ which is neither constant nor having real-period.





      Addendum. We prove the following claim:




      Claim. There exists a non-zero solution of $alpha = 1 - e^{-alpha}$ in $mathbb{C}$.




      Proof. We first note that $varphi(x) = x(1-log x)$ satisfies $varphi(0^+) = 0$ and $varphi(1) = 1$. Next, let $k$ be a positive integer. Then




      1. There exists $y in (2kpi, (2k+frac{1}{2})pi)$ such that $ varphi(sin(y)/y) = cos (y) $, by the intermediate-value theorem.


      2. Set $x = log(sin(y)/y)$.



      We claim that $ alpha = x + iy $ solves the equation. Indeed, it is clear that $ e^{-x}sin y = y $ holds. Moreover,



      $$ (1-x)e^x = varphi(sin(y)/y) = cos(y), $$



      and so, $1 - x = e^{-x}cos(y)$. Combining altogether,



      $$ 1 - alpha = 1 - x - iy = e^{-x}cos(y) - ie^{-x}sin(y) = e^{-x-iy} = e^{-alpha}. $$



      Therefore the claim follows. ////



      (A careful inespection shows that this construction produces all the solutions of $alpha = 1 - e^{-alpha}$ in the upper half-plane.)






      share|cite|improve this answer











      $endgroup$



      Here is a typical argument using characteristic equation:



      Let $alpha in mathbb{C}setminus{0}$ solve the equation $alpha = 1-e^{-alpha}$. One can indeed prove that such solution exists. One such solution is numerically given by $-2.08884 + 7.46149 i$. Then



      $$ int_{x-1}^{x} e^{alpha t} , mathrm{d}t = frac{1 - e^{-alpha}}{alpha} e^{alpha x} = e^{alpha x}, $$



      hence $f(x) = e^{alpha x}$ is one (complex-valued) solution of the equation



      $$ f(x) = int_{x-1}^{x} f(t) , mathrm{d}t tag{*}$$



      If one is interested in real-valued solutions only, then one can consider both the real part and the imaginary part of $e^{alpha x}$. In particular, this tells that there exists an analytic solution of $text{(*)}$ which is neither constant nor having real-period.





      Addendum. We prove the following claim:




      Claim. There exists a non-zero solution of $alpha = 1 - e^{-alpha}$ in $mathbb{C}$.




      Proof. We first note that $varphi(x) = x(1-log x)$ satisfies $varphi(0^+) = 0$ and $varphi(1) = 1$. Next, let $k$ be a positive integer. Then




      1. There exists $y in (2kpi, (2k+frac{1}{2})pi)$ such that $ varphi(sin(y)/y) = cos (y) $, by the intermediate-value theorem.


      2. Set $x = log(sin(y)/y)$.



      We claim that $ alpha = x + iy $ solves the equation. Indeed, it is clear that $ e^{-x}sin y = y $ holds. Moreover,



      $$ (1-x)e^x = varphi(sin(y)/y) = cos(y), $$



      and so, $1 - x = e^{-x}cos(y)$. Combining altogether,



      $$ 1 - alpha = 1 - x - iy = e^{-x}cos(y) - ie^{-x}sin(y) = e^{-x-iy} = e^{-alpha}. $$



      Therefore the claim follows. ////



      (A careful inespection shows that this construction produces all the solutions of $alpha = 1 - e^{-alpha}$ in the upper half-plane.)







      share|cite|improve this answer














      share|cite|improve this answer



      share|cite|improve this answer








      edited Jan 19 at 10:23

























      answered Jan 12 at 6:40









      Sangchul LeeSangchul Lee

      92.6k12167268




      92.6k12167268












      • $begingroup$
        I think you want $varphi(x)=x(1-log x)$ instead. It satisfies the same properties, and $$(1-x)e^x=(1-log(sin(y)/y))sin(y)/y.$$ Other than that, the real part $f(t)$ of $tmapsto e^{alpha t}$ will be $f(t)=e^{xt}cos(yt)$, a non-periodic solution of the original equation.
        $endgroup$
        – Questioner
        Jan 16 at 11:59












      • $begingroup$
        @Questioner, That's a nice catch! I will fix it :)
        $endgroup$
        – Sangchul Lee
        Jan 19 at 10:22


















      • $begingroup$
        I think you want $varphi(x)=x(1-log x)$ instead. It satisfies the same properties, and $$(1-x)e^x=(1-log(sin(y)/y))sin(y)/y.$$ Other than that, the real part $f(t)$ of $tmapsto e^{alpha t}$ will be $f(t)=e^{xt}cos(yt)$, a non-periodic solution of the original equation.
        $endgroup$
        – Questioner
        Jan 16 at 11:59












      • $begingroup$
        @Questioner, That's a nice catch! I will fix it :)
        $endgroup$
        – Sangchul Lee
        Jan 19 at 10:22
















      $begingroup$
      I think you want $varphi(x)=x(1-log x)$ instead. It satisfies the same properties, and $$(1-x)e^x=(1-log(sin(y)/y))sin(y)/y.$$ Other than that, the real part $f(t)$ of $tmapsto e^{alpha t}$ will be $f(t)=e^{xt}cos(yt)$, a non-periodic solution of the original equation.
      $endgroup$
      – Questioner
      Jan 16 at 11:59






      $begingroup$
      I think you want $varphi(x)=x(1-log x)$ instead. It satisfies the same properties, and $$(1-x)e^x=(1-log(sin(y)/y))sin(y)/y.$$ Other than that, the real part $f(t)$ of $tmapsto e^{alpha t}$ will be $f(t)=e^{xt}cos(yt)$, a non-periodic solution of the original equation.
      $endgroup$
      – Questioner
      Jan 16 at 11:59














      $begingroup$
      @Questioner, That's a nice catch! I will fix it :)
      $endgroup$
      – Sangchul Lee
      Jan 19 at 10:22




      $begingroup$
      @Questioner, That's a nice catch! I will fix it :)
      $endgroup$
      – Sangchul Lee
      Jan 19 at 10:22











      2












      $begingroup$

      This is a partial answer where I provided some properties for $f$ that complies with
      $$
      f(x)=int_{x-1}^xf(s),{rm d}s
      $$

      for all $xinmathbb{R}$. I will update this thread as I make any further progress.




      1. $fin C^{infty}(mathbb{R})$ if $fin L_{rm loc}^1(mathbb{R})$ ($f$ must be smooth if it is locally integrable).




      Proof. $forall,x_0inmathbb{R}$, $forall,xge x_0+1$, we have
      $$
      f(x)=int_{x_0}^xf(s),{rm d}s-int_{x_0}^{x-1}f(s),{rm d}s.
      $$

      Since $fin L_{rm loc}^1(mathbb{R})$, it follows that
      $$
      int_{x_0}^xf(s),{rm d}squadtext{and}quadint_{x_0}^{x-1}f(s),{rm d}s
      $$

      are absolutely continuous. Thus $f$ is absolutely continuous on $left(x_0+1,inftyright)$. The arbitrariness of $x_0$ implies that $f$ is absolutely continuous on $mathbb{R}$. Thus necessarily, $fin C(mathbb{R})$.



      Likewise, since $fin C(mathbb{R})$, it follows that
      $$
      int_{x_0}^xf(s),{rm d}squadtext{and}quadint_{x_0}^{x-1}f(s),{rm d}s
      $$

      are continuously differentiable, which leads to $fin C^1(mathbb{R})$.



      Repeat the above reasoning inductively, and we eventually obtain $fin C^{infty}(mathbb{R})$.$#$



      This conclusion suggests that, at least for a most general case, we shall only consider those $f$'s that are smooth on $mathbb{R}$.




      2. $f=0$ if $f\in L^1(\mathbb{R})$ ($f$ must be zero if it is integrable on $\mathbb{R}$).




      Proof. Since $f\in L^1(\mathbb{R})$, we have
      $$
      \int_{x-1}^xf(s)\,{\rm d}s=\int_{\mathbb{R}}1_{\left[0,1\right]}(x-s)f(s)\,{\rm d}s=\left(1_{\left[0,1\right]}*f\right)(x).
      $$

      Hence the original relation is equivalent to the convolution equation
      $$
      f=1_{\left[0,1\right]}*f
      $$
      on $\mathbb{R}$.



      Since $f\in L^1(\mathbb{R})$, its Fourier transform $\hat{f}$ is well defined. By the convolution theorem,
      $$
      \hat{f}=\widehat{1_{\left[0,1\right]}}\,\hat{f}\Longrightarrow\left(1-\widehat{1_{\left[0,1\right]}}\right)\hat{f}=0.
      $$

      Since $\widehat{1_{\left[0,1\right]}}(\xi)\ne 1$ for $\xi\ne 0$, this implies $\hat{f}(\xi)=0$ for all $\xi\ne 0$; the continuity of $\hat{f}$ then yields $\hat{f}(0)=0$ as well. Consequently,
      $$
      \hat{f}=0\iff f=0.\quad\#
      $$



      This conclusion shows that any non-trivial solution of the original equation must be non-integrable on $\mathbb{R}$ (e.g., a non-zero constant). Nevertheless, since it is reasonable to assume $f$ locally integrable, the original equation can always be written in the convolution form $f=1_{\left[0,1\right]}*f$; just note that $\hat{f}$ is not defined and the convolution theorem no longer applies when $f$ is locally integrable but not integrable on $\mathbb{R}$.
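The key analytic fact behind Property 2 is that $\widehat{1_{[0,1]}}(\xi)=\frac{1-e^{-2i\pi\xi}}{2i\pi\xi}$, whose modulus $|\sin(\pi\xi)/(\pi\xi)|$ is strictly below $1$ for $\xi\ne 0$. A small numerical check of this (a sketch; the function name `indicator_hat` is my own):

```python
import cmath

def indicator_hat(xi):
    """Fourier transform of 1_[0,1]: integral_0^1 exp(-2*pi*i*xi*s) ds."""
    if xi == 0:
        return 1.0 + 0.0j
    z = 2j * cmath.pi * xi
    return (1 - cmath.exp(-z)) / z

# |F(xi)| < 1 for xi != 0, so (1 - F) * fhat = 0 forces fhat(xi) = 0 there.
for xi in [0.5, 1.0, 1.7, 3.0, -2.3]:
    assert abs(indicator_hat(xi)) < 1.0
print(abs(indicator_hat(0.5)))  # = 2/pi, roughly 0.6366
```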




      3. $f\equiv\text{const}$ if $f\in L_{\rm loc}^1(\mathbb{R})\cap C_T(\mathbb{R})$ ($f$ must be constant if it is locally integrable and $T$-periodic).




      Proof. Since $f\in L_{\rm loc}^1(\mathbb{R})$, Property 1 gives $f\in C^{\infty}(\left[0,T\right])$. Thanks to the periodicity, $f$ admits a Fourier series on $\left[0,T\right]$:
      $$
      f(x)\sim\sum_{n\in\mathbb{Z}}a_n\,e^{\frac{2i\pi nx}{T}}.
      $$



      Since $f\in C^{\infty}(\left[0,T\right])$, the Fourier series of $f$ converges absolutely and uniformly to $f$. Therefore, on the one hand,
      $$
      f(x)=\sum_{n\in\mathbb{Z}}a_n\,e^{\frac{2i\pi nx}{T}}.
      $$

      On the other hand,
      \begin{align}
      \int_{x-1}^xf(s)\,{\rm d}s&=\int_{x-1}^x\sum_{n\in\mathbb{Z}}a_n\,e^{\frac{2i\pi ns}{T}}{\rm d}s\\
      &=\sum_{n\in\mathbb{Z}}a_n\int_{x-1}^xe^{\frac{2i\pi ns}{T}}{\rm d}s\\
      &=\sum_{n\in\mathbb{Z}}a_n\frac{1-e^{-\theta_n}}{\theta_n}e^{\frac{2i\pi nx}{T}},
      \end{align}

      where $\theta_n=2i\pi n/T$ and, by convention, $\left(1-e^{-\theta_0}\right)/\theta_0=1$ (this is defined so that the form of the series is preserved; otherwise, one may carry out the integration separately for $n=0$ and $n\ne 0$ and find the results identical).



      The original equation therefore requires
      $$
      a_n=a_n\frac{1-e^{-\theta_n}}{\theta_n}\iff\left(1-\frac{1-e^{-\theta_n}}{\theta_n}\right)a_n=0.
      $$

      Note that
      $$
      1-\frac{1-e^{-z}}{z}=0
      $$

      has only one solution on $i\mathbb{R}$ (the imaginary axis), namely $z=0$. Since $\theta_n\ne 0$ for all $n\ne 0$, we must have $a_n=0$ for all $n\ne 0$. This leads to $f(x)=a_0$, i.e., $f$ is constant on $\mathbb{R}$.$\#$



      This conclusion shows that any continuous periodic function satisfying the original equation must be constant.
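The last step — that the factor $(1-e^{-\theta_n})/\theta_n$ never equals $1$ for $\theta_n=2i\pi n/T$, $n\ne 0$, so every non-constant Fourier coefficient must vanish — admits a quick numerical sanity check (a sketch, not part of the proof; the sample periods are arbitrary):

```python
import cmath

def factor(n, T):
    """(1 - exp(-theta_n)) / theta_n with theta_n = 2*pi*i*n/T."""
    theta = 2j * cmath.pi * n / T
    return (1 - cmath.exp(-theta)) / theta

# For every nonzero mode the factor differs from 1, forcing a_n = 0.
for T in [1.0, 2.0, cmath.pi, 7.3]:
    for n in range(1, 20):
        assert abs(factor(n, T) - 1) > 1e-6
        assert abs(factor(-n, T) - 1) > 1e-6
```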



      [TBC...]



      [Following @Empy2's answer, I believe a non-periodic solution of the original equation exists. Yet by the above properties, such a solution must be smooth and is most likely unbounded. Trying a power series $f(x)=\sum_{n=0}^{\infty}a_nx^n$ could be promising, but it leads to an infinite-dimensional linear system whose convergence is unknown, which makes it hard to justify interchanging summation and integration...]
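One way to make the smooth, unbounded, non-periodic candidate concrete (an exploratory sketch of mine, not claimed in the answer above): substituting $f(x)=e^{zx}$ into $f(x)=\int_{x-1}^x f(s)\,{\rm d}s$ gives the characteristic equation $(1-e^{-z})/z=1$, i.e. $e^{-z}=1-z$. Besides $z=0$ it has complex roots with non-zero real part, and the real part of $e^{zx}$ for such a root is a smooth, non-periodic solution that is unbounded in one direction. A Newton iteration locates one such root:

```python
import cmath

def g(z):
    """f(x) = exp(z*x) solves the averaging equation iff g(z) = 0."""
    return cmath.exp(-z) - (1 - z)

def gprime(z):
    return -cmath.exp(-z) + 1

# Newton's method; the starting guess is mine, chosen by inspection.
z = complex(-2.0, 7.0)
for _ in range(50):
    z = z - g(z) / gprime(z)

print(z)  # a root with Re z != 0, roughly -2.09 + 7.46j
# Re z < 0: exp(Re(z)*x) * cos(Im(z)*x) is smooth, non-periodic, and
# unbounded as x -> -infinity, consistent with the remark above.
assert abs(g(z)) < 1e-10 and z.real < -0.5
```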

















      $endgroup$


















          edited Jan 13 at 0:31

























          answered Jan 12 at 4:17









          hypernova























              1












              $begingroup$

              Define $f(x)=3x^2-4x+1$ for $x\in(0,1)$, so that $f(1)=\int_0^1 f(s)\,\mathrm ds=0$ and the integral equation holds at $x=1$.

              For $x\in(1,2)$, solve the differential equation
              $$\frac{df}{dx}=f(x)-(3(x-1)^2-4(x-1)+1).$$
              Iterate the procedure for $x\in(2,3)$, and so on.
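This method of steps can be checked numerically (a sketch under the stated initial data; the step size and tolerance are my choices). On $(1,2)$ the delayed term $f(x-1)$ is the known polynomial, so the step is an ordinary linear ODE, integrated here with classical RK4; the result is then compared against $\int_{x-1}^x f(s)\,\mathrm ds$ at $x=2$:

```python
import math

def f0(t):
    """Initial data on [0, 1]."""
    return 3*t*t - 4*t + 1

# Method of steps on [1, 2]: f'(x) = f(x) - f0(x - 1), f(1) = 0.
h, n = 1e-3, 1000
xs = [1 + i*h for i in range(n + 1)]
f = [0.0]                    # f(1) = integral of f0 over [0, 1] = 0

def rhs(x, y):
    return y - f0(x - 1)

for i in range(n):           # classical RK4 step
    x, y = xs[i], f[i]
    k1 = rhs(x, y)
    k2 = rhs(x + h/2, y + h/2*k1)
    k3 = rhs(x + h/2, y + h/2*k2)
    k4 = rhs(x + h, y + h*k3)
    f.append(y + h/6*(k1 + 2*k2 + 2*k3 + k4))

# Averaging property at x = 2: f(2) should equal integral_1^2 f(s) ds.
integral = h * (sum(f) - 0.5*(f[0] + f[-1]))   # trapezoid rule
print(abs(f[-1] - integral))  # small (trapezoid error ~ h^2)
assert abs(f[-1] - integral) < 1e-5
```

(Solving the linear ODE exactly gives $f(2)=8-3e$, which the RK4 value matches to high accuracy.)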















              $endgroup$













              • $begingroup$
                That's nice. However, how do we define $f(x)$ for $x<0$? Also, I have not proved yet that the equation $f'(x)=f(x)-f(x-1)$ implies that $f$ satisfies the property described above.
                $endgroup$
                – Questioner
                Jan 7 at 9:09










              • $begingroup$
                Nevermind the first part of the comment above: For $x\in[-1,0]$ we have $f'(x+1)=f(x+1)-f(x)$, so this defines $f$ on $[-1,1]$ and we iterate...
                $endgroup$
                – Questioner
                Jan 7 at 9:13










              • $begingroup$
                @Questioner Note that $$f(1)=\int_0^1f(t)\,\mathrm dt=0,$$ thus integrating $f'(t)=f(t)-f(t-1)$ for $t\in[1,x]$ yields $$f(x)=\int_1^xf(t)\,\mathrm dt-\int_0^{x-1}f(t)\,\mathrm dt=\int_{x-1}^xf(t)\,\mathrm dt$$ for $x\geqslant1$.
                $endgroup$
                – Saad
                Jan 12 at 2:09



















              answered Jan 7 at 9:02









              Empy2

























              -1












              $begingroup$

                  Suppose $f$ is $1$-periodic, and write
                  $$f(x) = \dfrac{a_0}2+\sum\limits_{n=1}^\infty\left(a_n\cos2\pi nx + b_n\sin2\pi nx\right);$$
                  then
                  $$\int\limits_{x-1}^x f(s)\,\mathrm ds = \dfrac {a_0}2,$$
                  so $f(x)$ is constant. (Note that this argument covers only the case where $f$ has period exactly $1$.)
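The claim that a unit-length window average of a $1$-periodic trigonometric series always equals $a_0/2$ can be checked numerically (a sketch; the sample coefficients are arbitrary choices of mine):

```python
import math

def f(x, a0=0.8, coeffs=((1.0, -0.5), (0.3, 0.7))):
    """1-periodic trig polynomial: a0/2 + sum a_n cos(2 pi n x) + b_n sin(2 pi n x)."""
    s = a0 / 2
    for n, (a, b) in enumerate(coeffs, start=1):
        s += a*math.cos(2*math.pi*n*x) + b*math.sin(2*math.pi*n*x)
    return s

def window_average(x, m=20000):
    """Trapezoid approximation of integral_{x-1}^x f(s) ds."""
    h = 1.0 / m
    pts = [f(x - 1 + i*h) for i in range(m + 1)]
    return h * (sum(pts) - 0.5*(pts[0] + pts[-1]))

# The window average is a0/2 = 0.4 regardless of x.
for x in [0.0, 0.37, 1.9, -2.4]:
    assert abs(window_average(x) - 0.4) < 1e-9
```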















              $endgroup$























                  answered Jan 12 at 0:10









                  Yuri Negometyanov





























