Alternative proof for linear independence of n solutions of an nth-order ordinary differential equation


























I am a beginner with differential equations, and I came across the existence and uniqueness theorem for an $n$th-order differential equation in a book I am referring to. To make this question precise, I must first give the statement of the theorem for an $n$th-order ordinary differential equation.




Definition:
Let $L(y)(x)=y^{(n)}(x)+p_{1}(x)y^{(n-1)}(x)+\dots+p_{n}(x)y(x)=0$, $x\in I$, be an $n$th-order ODE, where $p_{1},p_{2},\dots,p_{n}$ are defined on an interval $I$ containing a point $x_{0}$, and let $a_{0},a_{1},\dots,a_{n-1}$ be $n$ constants. Then there exists a unique solution $\phi$ on $I$ of the $n$th-order ODE given above satisfying the initial conditions $\phi(x_{0})=a_{0},\ \phi'(x_{0})=a_{1},\ \dots,\ \phi^{(n-1)}(x_{0})=a_{n-1}$.
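
For concreteness, here is the theorem on a toy example (a minimal sympy sketch added for illustration; the equation $y''+y=0$ and the point $x_{0}=0$ are illustrative choices, not from the book): for every pair of constants $a_{0},a_{1}$ there is exactly one solution.

    import sympy as sp

    x, a0, a1 = sp.symbols('x a0 a1')
    y = sp.Function('y')

    # IVP: y'' + y = 0 with y(0) = a0, y'(0) = a1
    sol = sp.dsolve(y(x).diff(x, 2) + y(x), y(x),
                    ics={y(0): a0, y(x).diff(x).subs(x, 0): a1})
    print(sol)  # Eq(y(x), a0*cos(x) + a1*sin(x)) -- the unique solution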




A note is also given:




Note: Suppose that $\phi_{1}(x),\dots,\phi_{n}(x)$ are $n$ solutions of $L(y)(x)=0$ given above, and suppose that $c_{1},c_{2},\dots,c_{n}$ are $n$ arbitrary constants. Since $L(\phi_{1})=L(\phi_{2})=\dots=L(\phi_{n})=0$ and $L$ is a linear operator, we have $$L(c_{1}\phi_{1}+c_{2}\phi_{2}+\dots+c_{n}\phi_{n})=c_{1}L(\phi_{1})+\dots+c_{n}L(\phi_{n})=0.$$



The $n$ solutions are linearly independent when $$c_{1}\phi_{1}+\dots+c_{n}\phi_{n}=0,\quad x\in I \implies c_{1}=c_{2}=\dots=c_{n}=0.$$




Next comes the question. We have to prove that the third-order ODE $y'''+p_{1}(x)y''+p_{2}(x)y'+p_{3}(x)y=0$, where $p_{1},p_{2},p_{3}$ are continuous functions on the interval $I$, has three linearly independent solutions for $x\in I$.



To prove this, the author says:



Using the existence and uniqueness theorem for $n$th-order ODEs stated above, we conclude that there exist solutions $\phi_{1}(x),\phi_{2}(x),\phi_{3}(x)$ of the given ODE such that, for $x_{0}\in I$,



$\phi_{1}(x_{0})=0,\ \phi_{1}'(x_{0})=0,\ \phi_{1}''(x_{0})=0$

$\phi_{2}(x_{0})=0,\ \phi_{2}'(x_{0})=1,\ \phi_{2}''(x_{0})=0$

$\phi_{3}(x_{0})=0,\ \phi_{3}'(x_{0})=0,\ \phi_{3}''(x_{0})=1$



and then the author proceeds with his proof of the claim. There are two things I did not understand here. The first is the linear operator $L$ in the definition [how is $L(\phi_{1})=\dots=L(\phi_{n})=0$? what kind of linear operator is this? any example?], and the second is how the author reached this conclusion from the existence and uniqueness theorem.



I also checked the Wronskian, and it is identically $0$.





















ordinary-differential-equations alternative-proof

asked Jan 26 at 7:17 by Siddharth Mishra; edited Jan 31 at 14:18 by LutzL
1 Answer


The first one is the linear operator $L$ part in the definition [ how $L\phi_1=L\phi_2=\cdots=L\phi_n=0$? what kind of linear operator is this? any example? ]




We are working on a space of functions - say, all real-valued functions that are infinitely differentiable on a certain interval. This is, of course, a vector space over $\mathbb{R}$. The derivative $D$ (given by $D(f)=f'$) and the higher derivatives $D^k(f)=f^{(k)}$ are linear operators from this space to itself. So are the multiplication operators $M_{p_i}$ given by $M_{p_i}(y)(x) = p_i(x)y(x)$. Composing these linear operators, as in $y(x)\to p_i(x)y^{(n-i)}(x)$, also gives linear operators. Then we add them up and get $L$, the linear operator that takes $y(x)$ to $y^{(n)}(x)+p_1(x)y^{(n-1)}(x)+\cdots + p_{n-1}(x)y'(x)+p_n(x)y(x)$.



That's the point: every linear differential equation is of the form $Ly=0$ for some linear operator $L$, built from derivative operators, multiplication by functions, and addition.
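
As a concrete check (a minimal sympy sketch; the second-order equation $y''+y=0$ is an assumed example, not the question's ODE), one can implement such an $L$ and verify both $L\phi_i=0$ and linearity:

    import sympy as sp

    x, c1, c2 = sp.symbols('x c1 c2')

    def L(f):
        # L = D^2 + 1, the operator for the example equation y'' + y = 0
        return sp.diff(f, x, 2) + f

    phi1, phi2 = sp.sin(x), sp.cos(x)
    print(sp.simplify(L(phi1)), sp.simplify(L(phi2)))  # 0 0
    print(sp.simplify(L(c1*phi1 + c2*phi2)))           # 0, by linearity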




...and second, how did the author get to this conclusion from the existence and uniqueness theorem?




We choose our base point $x_0$ for the initial conditions. Then there is a linear map $X$ from our space of functions to $\mathbb{R}^n$ - we take $f$ to the vector $(f(x_0),f'(x_0),\dots,f^{(n-1)}(x_0))$. If the functions $f_1,f_2,\dots,f_m$ are linearly dependent with the relation $c_1f_1+c_2f_2+\cdots+c_mf_m=0$, then their images $X(f_j)$ satisfy the same dependence relation $c_1X(f_1)+c_2X(f_2)+\cdots+c_mX(f_m)=0$. By the contrapositive, if the $X(f_j)$ are linearly independent, so are the $f_j$. (This is a standard fact of linear algebra.)
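
Here is that fact in miniature (a sympy sketch with toy choices $n=2$, $m=3$, $x_0=0$; the functions are my own): a dependence relation among the functions is inherited by their images under $X$.

    import sympy as sp

    x = sp.symbols('x')
    # Dependent functions: 2*f1 + 3*f2 - f3 = 0 identically.
    f1, f2, f3 = x, x**2, 2*x + 3*x**2

    # X maps f to the vector (f(0), f'(0)) in R^2.
    X = lambda f: sp.Matrix([f.subs(x, 0), sp.diff(f, x).subs(x, 0)])

    print(2*X(f1) + 3*X(f2) - X(f3))  # Matrix([[0], [0]]): the same relation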



But then, the existence half of the existence-uniqueness theorem says we can find functions $\phi_i$ satisfying the differential equation with $\phi_i^{(i-1)}(x_0)=1$ and $\phi_i^{(k)}(x_0)=0$ for each other $k$ in $\{0,1,\dots,n-1\}$. Under our linear map $X$, these $\phi_i$ map to a linearly independent set - the standard basis of $\mathbb{R}^n$. Pulling back, the $\phi_i$ must also be linearly independent.
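
For instance (a sketch assuming the simplest third-order case $y'''=0$ with $x_0=0$, so the solutions are explicit), the three solutions with standard-basis initial data are $1$, $x$ and $x^2/2$, and $X$ sends them to the standard basis of $\mathbb{R}^3$:

    import sympy as sp

    x = sp.symbols('x')
    phi = [sp.Integer(1), x, x**2 / 2]  # phi_i^(k)(0) = 1 exactly when k = i-1

    # Row i holds (phi_i(0), phi_i'(0), phi_i''(0)).
    M = sp.Matrix([[sp.diff(p, x, k).subs(x, 0) for k in range(3)]
                   for p in phi])
    print(M)        # the 3x3 identity matrix
    print(M.det())  # 1, so the phi_i are linearly independent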



Where does the uniqueness half of the existence-uniqueness theorem come in? That tells us that we can't have more than $n$ linearly independent solutions: if we had $n+1$, their images under $X$ would be $n+1$ vectors in $\mathbb{R}^n$ and hence linearly dependent, so some nontrivial combination would be a nonzero solution $\phi_{n+1}$ with $\phi_{n+1}(x_0)=\phi_{n+1}'(x_0)=\cdots=\phi_{n+1}^{(n-1)}(x_0)=0$, and uniqueness rules that out.
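
Continuing the $y'''=0$ sketch, all-zero initial data really does force the zero solution, which is exactly the uniqueness statement being used:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # y''' = 0 with y(0) = y'(0) = y''(0) = 0: only the zero solution remains.
    sol = sp.dsolve(y(x).diff(x, 3), y(x),
                    ics={y(0): 0,
                         y(x).diff(x).subs(x, 0): 0,
                         y(x).diff(x, 2).subs(x, 0): 0})
    print(sol)  # Eq(y(x), 0)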




I also checked the Wronskian, and it is identically $0$.




It shouldn't be. It would help to fix the typo there - it should be $\phi_1(x_0)=1$. Also, the leading term in the definition of $L$ should be $y^{(n)}$.
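
The same toy case shows the typo's effect (again the assumed example $y'''=0$, $x_0=0$): with all-zero initial data, uniqueness forces $\phi_1\equiv 0$ and the Wronskian vanishes identically, while the corrected data give a Wronskian that is constantly $1$:

    import sympy as sp

    x = sp.symbols('x')
    good = [sp.Integer(1), x, x**2 / 2]  # corrected data: phi_1(0) = 1
    typo = [sp.Integer(0), x, x**2 / 2]  # all-zero data forces phi_1 = 0

    for phi in (typo, good):
        W = sp.Matrix([[sp.diff(p, x, k) for p in phi] for k in range(3)])
        print(sp.simplify(W.det()))  # 0 for the typo set, 1 for the corrected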






answered Jan 26 at 9:01 by jmerry













• I understand that from the existence theorem we can take the initial values to be $1$, but can't we take arbitrary constants like $k_1, k_2, \dots, k_n$?
  – Siddharth Mishra, Jan 26 at 10:09












• What I mean is that assuming the value(s) to be $1$ or $0$ is not that satisfactory.
  – Siddharth Mishra, Jan 26 at 10:16










• Have you read what the existence-uniqueness theorem says? You wrote it down, after all. We can choose as our initial conditions any list of values for $f$ and its first $n-1$ derivatives at the starting point. A list that's all zeros except for a single $1$ is certainly included. Sure, we could take other values, but we don't need to.
  – jmerry, Jan 26 at 10:23










• It says that the initial value problem has a solution, not that you can choose any constants. Don't you think this proof will work only when the above condition is satisfied? If not, then why not take arbitrary constants again and show the proof? If yes... then we could prove theorems like Pythagoras' and others using the same technique... again, the definition said that the initial value problem has a solution, not that we can take any constants to prove the theorem. Otherwise, if there is some other meaning, then please explain.
  – Siddharth Mishra, Jan 26 at 10:51








• The theorem says that every initial value problem has a solution. It says that the initial value problem $y(x_0)=1,\ y'(x_0)=0,\ y''(x_0)=0,\ \dots,\ y^{(n-1)}(x_0)=0$ has a solution. It also says that the initial value problem $y(x_0)=0,\ y'(x_0)=1,\ y''(x_0)=0,\ \dots,\ y^{(n-1)}(x_0)=0$ has a solution. It also says that the initial value problem $y(x_0)=0,\ y'(x_0)=0,\ y''(x_0)=1,\ \dots,\ y^{(n-1)}(x_0)=0$ has a solution. And so on, for all $n$ of them. We are simply applying what the theorem says, not inventing anything new.
  – jmerry, Jan 26 at 11:27










