When is the nth component of a (co)vector equal to its scalar product with the nth element of its dual basis?












My solution to a problem in a book on tensors differs from the book's solution, and I don't know why. Here's the problem:



$$\vec e_1 = (2, 1) \hspace{1em} \vec e_2 = (-1, 3) \\ \text{Find the dual basis of covectors.} $$



I decided to use the formula



$$ V^\alpha = \vec V (\tilde e^\alpha) $$



which equates the $\alpha$th component of the vector $\vec V$ to its scalar product with the $\alpha$th basis covector.



Defining



$$ \tilde e^1 = (a, b) \hspace{1em} \tilde e^2 = (c, d) $$



this yields the equations (hoping I'm using upper and lower indices correctly here and not confusing anyone)



$$
2 = \vec e_1^1 = \vec e_1 \tilde e^1 = 2a + b \\
1 = \vec e_1^2 = \vec e_1 \tilde e^2 = 2c + d \\
-1 = \vec e_2^1 = \vec e_2 \tilde e^1 = -a + 3b \\
3 = \vec e_2^2 = \vec e_2 \tilde e^2 = -c + 3d
$$



Solving these equations for $a$, $b$, $c$, and $d$, I get



$$ \tilde e^1 = (1, 0) \hspace{1em} \tilde e^2 = (0, 1) $$



The solution in the book was to use the duality condition



$$ \langle \tilde e^\alpha, \vec e_\beta \rangle = \delta^\alpha_\beta $$



from which it derived a system of equations like mine, but with $2, 1, -1, 3$ replaced by $1, 0, 0, 1$, yielding the dual basis



$$ \tilde e^1 = \left( \frac{3}{7}, \frac{1}{7} \right) \hspace{1em} \tilde e^2 = \left( -\frac{1}{7}, \frac{2}{7} \right) $$



for which it is not the case that $V^\alpha = \vec V (\tilde e^\alpha)$, as one can easily verify: checking with $\vec V = \vec e_1$ we have $2 = V^1 \neq \vec V \tilde e^1 = 2 \cdot \frac{3}{7} + 1 \cdot \frac{1}{7} = 1$.



Theirs definitely seems more correct, but I'm wondering why my solution is incorrect. Does the formula I chose not apply if the basis (co)vectors aren't orthogonal?
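As a quick numerical sanity check (a sketch using plain dot products in standard coordinates, not part of the original post), both candidate dual bases can be tested against the duality condition $\langle \tilde e^\alpha, \vec e_\beta \rangle = \delta^\alpha_\beta$:

```python
from fractions import Fraction as F

def dot(x, y):
    """Pair a covector with a vector, both written in standard coordinates."""
    return sum(xi * yi for xi, yi in zip(x, y))

e = [(F(2), F(1)), (F(-1), F(3))]                 # given vector basis
book = [(F(3, 7), F(1, 7)), (F(-1, 7), F(2, 7))]  # book's dual basis
mine = [(F(1), F(0)), (F(0), F(1))]               # candidate from the question

def pairing(dual):
    # Matrix of <dual^a, e_b>; duality requires this to be the identity.
    return [[dot(dual[a], e[b]) for b in range(2)] for a in range(2)]

print(pairing(book))  # identity matrix: satisfies the duality condition
print(pairing(mine))  # [[2, -1], [1, 3]]: fails it
```

Only the book's covectors produce the identity pairing matrix; the identity covectors reproduce the basis matrix itself.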










      tensors






      edited Jan 23 at 1:27







      jcarpenter2

















      asked Jan 22 at 8:44









jcarpenter2

          1 Answer
The confusion here stems from interpreting the coefficients of the basis vectors as the basis vectors themselves. In this particular exercise, if you take, for instance,



$$\vec e_1 = \begin{bmatrix} 2 & 1 \end{bmatrix}^\top $$



the question begging to be answered is: what is $2$ and what is $1$? They certainly form a vector in the sense of a list, but they are actually coefficients with respect to another (tacitly unspoken) basis, which we could symbolize as $\{\color{red}{\vec u_1},\color{red}{\vec u_2}\},$ so that



$$\vec e_1 = 2\color{red}{\vec u_1} + 1 \color{red}{\vec u_2}$$



          and, likewise,



$$\vec e_2 = \begin{bmatrix} -1 & 3 \end{bmatrix}^\top $$



          really implies,



$$\vec e_2 = -1\color{red}{\vec u_1} + 3 \color{red}{\vec u_2}$$



Your proposed system of equations is simply designed to recover orthonormal covector coordinates. This is what matching the left-hand side of each equation with the corresponding coefficient winds up producing, now assuming an underlying covector basis $\{\color{blue}{\tilde u^1},\color{blue}{\tilde u^2}\}$:



$$\tilde e^1 = 1\color{blue}{\tilde u^1} + 0 \color{blue}{\tilde u^2}$$



          and



$$\tilde e^2 = 0\color{blue}{\tilde u^1} + 1 \color{blue}{\tilde u^2}$$



          in your proposed answer.



But by skipping the Kronecker delta condition that pairs the dual-space basis with the vector-space basis, you are simply deferring the question of how the vector and covector bases match up:



          What would be the inner product of these basis vectors and covectors? For instance,



$$\begin{align}
\langle \tilde e^1,\vec e_1\rangle &= \left(1\color{blue}{\tilde u^1} + 0 \color{blue}{\tilde u^2}\right)\left(2\color{red}{\vec u_1} + 1 \color{red}{\vec u_2}\right)\\
&= 2 \color{blue}{\tilde u^1}\color{red}{\vec u_1}+1\color{blue}{\tilde u^1}\color{red}{\vec u_2}
\end{align}$$



leaves both $\color{blue}{\tilde u^1}\color{red}{\vec u_1}$ and $\color{blue}{\tilde u^1}\color{red}{\vec u_2}$ undefined.



The way the exercise is actually solved in the book implies that the unspoken underlying vector basis $\{\color{red}{\vec u_1},\color{red}{\vec u_2}\}$ is the orthonormal standard Euclidean basis, linked to the covector basis $\{\color{blue}{\tilde u^1},\color{blue}{\tilde u^2}\}$ through the Kronecker delta, so that



$$\begin{align}
\langle \tilde e^1,\vec e_1\rangle &= \left(\frac 3 7\color{blue}{\tilde u^1} + \frac 1 7 \color{blue}{\tilde u^2}\right)\left(2\color{red}{\vec u_1} + 1 \color{red}{\vec u_2}\right)\\
&= \frac 6 7 \color{blue}{\tilde u^1}\color{red}{\vec u_1}+\frac 3 7 \color{blue}{\tilde u^1}\color{red}{\vec u_2}+\frac 2 7 \color{blue}{\tilde u^2}\color{red}{\vec u_1}+\frac 1 7\color{blue}{\tilde u^2}\color{red}{\vec u_2}\\
&= \frac 6 7 \cdot 1 + \frac 1 7 \cdot 1\\
&= 1
\end{align}$$



works out as implicitly desired only if $\color{blue}{\tilde u^\alpha}\color{red}{\vec u_\beta}=\delta^\alpha_\beta.$
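Concretely, once the red basis is taken to be the standard one, the duality condition says that the matrix whose rows are the dual covectors is the inverse of the matrix whose columns are the basis vectors. A sketch of that computation (exact arithmetic via `fractions`; this matrix-inverse framing is an assumption layered on the answer above, not taken from it):

```python
from fractions import Fraction as F

# Columns of E are the basis vectors e_1 = (2, 1) and e_2 = (-1, 3),
# expressed in the standard (red) basis.
E = [[F(2), F(-1)],
     [F(1), F(3)]]

# 2x2 inverse by hand: determinant and adjugate.
det = E[0][0] * E[1][1] - E[0][1] * E[1][0]   # = 7
E_inv = [[ E[1][1] / det, -E[0][1] / det],
         [-E[1][0] / det,  E[0][0] / det]]

# Rows of E^{-1} are the dual covectors, since E^{-1} E = I is exactly
# the condition <e~^a, e_b> = delta^a_b.
print(E_inv[0])  # [Fraction(3, 7), Fraction(1, 7)]   -> e~^1
print(E_inv[1])  # [Fraction(-1, 7), Fraction(2, 7)]  -> e~^2
```

The rows recover exactly the book's covectors $\left(\frac 3 7, \frac 1 7\right)$ and $\left(-\frac 1 7, \frac 2 7\right)$.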






                edited Jan 23 at 13:30

























                answered Jan 23 at 2:40









Antoni Parellada
