What are some usual norms for matrices?


























I am familiar with norms on vectors and functions, but do there exist norms for spaces of matrices, i.e. for $A$ some $n \times m$ matrix?

If so, would that imply that matrices also form some sort of vector space?


































  • If you want to write down a norm, you need a vector space in the first place, by definition. And what do you mean by "some kind of vector space"? You can add matrices and multiply them by scalars.
    – Mike Miller
    Aug 12 '15 at 7:25






  • Yes, matrices form vector spaces, and yes, there are (many) norms: en.wikipedia.org/wiki/Matrix_norm
    – Bernhard
    Aug 12 '15 at 7:25










  • @MikeMiller Sorry, I just went through linear algebra last semester and having a hard time remembering that vectors are not just little arrows
    – Shamisen Expert
    Aug 12 '15 at 7:26












  • Sure, there's no need to apologize. Sorry if my comment sounded rude in any way.
    – Mike Miller
    Aug 12 '15 at 7:27
















linear-algebra matrices vector-spaces norm matrix-norms






edited Aug 21 '18 at 11:05









Rodrigo de Azevedo











asked Aug 12 '15 at 7:23









Shamisen Expert























2 Answers
































First of all, yes: the matrices form a vector space. You can add any two matrices of the same size, and you can multiply matrices by a number, and you'll always get another matrix; together with the usual axioms (which these operations satisfy), that's all you need for a set to be a vector space. Matrices have a bit more structure, too: for one, you can multiply two matrices together (which you can't generally do with vectors). Moreover, matrices really represent linear maps. I'll get back to those in a minute.



There are three kinds of matrix norms, each of which is useful in different circumstances.





Norms ("just" a norm):



Sometimes a norm is just a norm. Often, it's useful to think of a matrix as a "box of numbers", in the same way that you would think of a vector in $\Bbb R^n$ as a "list of numbers". A "matrix norm" by this definition is any function on the matrices that satisfies the usual rules that define a norm. In particular, for any matrices $A, B \in \Bbb R^{n \times m}$ and any scalar $\alpha$, we need to have

  1. $\|A\| \geq 0$, with $\|A\| = 0 \iff A = 0$

  2. $\|\alpha A\| = |\alpha| \|A\|$

  3. $\|A + B\| \leq \|A\| + \|B\|$

You would use these norms any time you would use an ordinary norm. One reason we need this kind of norm is to show that a function involving matrices is continuous or differentiable. The usual example of this kind of norm is the "entrywise $p$-norm", given by
$$
\|A\| = \left(\sum_{i=1}^n \sum_{j=1}^m |a_{ij}|^p\right)^{1/p}
$$
for $1 \leq p \leq \infty$.



Every matrix norm can be thought of in this way, i.e. as a "general norm". However, sometimes we want our matrix norm to have a bit more structure.
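As a quick illustration (my own plain-Python sketch, not part of the original answer), the entrywise $p$-norm can be computed directly from the definition, treating the matrix as a "box of numbers":

```python
import math

def entrywise_p_norm(A, p):
    """Entrywise p-norm of a matrix given as a list of rows.
    For p = infinity the norm degenerates to the maximum |a_ij|."""
    entries = [abs(a) for row in A for a in row]
    if p == math.inf:
        return max(entries)
    return sum(a ** p for a in entries) ** (1.0 / p)

A = [[1, -2],
     [3,  4]]

print(entrywise_p_norm(A, 1))         # sum of all |a_ij| = 10
print(entrywise_p_norm(A, 2))         # Frobenius norm = sqrt(30)
print(entrywise_p_norm(A, math.inf))  # largest |a_ij| = 4
```

The three printed values show how the same "list of numbers" view gives the $1$-, $2$-, and $\infty$-entrywise norms.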





Submultiplicative Norms (AKA "matrix norms")




We say that a matrix norm $\|\cdot\|$ is submultiplicative if, in addition to being a norm, it also satisfies the inequality
$$
\|AB\| \leq \|A\| \cdot \|B\|
$$
for any square matrices $A, B$ of the same size.




A lot of the time, your everyday norm just won't cut it. For those occasions, submultiplicative norms come in handy. They are useful for dealing with polynomials of matrices, since we have inequalities like
$$
\|f(A)\| = \left\|\sum_{k} a_k A^k \right\| \leq \sum_k |a_k| \|A\|^k
$$
Notably, if the $a_k$ are non-negative, then $\|f(A)\| \leq f(\|A\|)$, so that we have $\|e^A\| \leq e^{\|A\|}$, for instance.



Submultiplicative norms are also very useful for spectral (eigenvalue) analysis. In fact, we have some theorems relating $\rho(A)$, the spectral radius of $A$, to any submultiplicative norm:




  • $\|A\| \geq \rho(A)$

  • $\rho(A) = \lim_{k \to \infty} \|A^k\|^{1/k}$

  • $\rho(A) = \inf_{\|\cdot\| \text{ submultiplicative}} \|A\|$
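
As a sanity check (my own plain-Python sketch, not part of the original answer), we can watch $\|A^k\|^{1/k}$ converge to the spectral radius. Here we use the Frobenius norm, which is submultiplicative, and an upper-triangular matrix whose eigenvalues can be read off the diagonal:

```python
import math

def matmul(A, B):
    # Naive product of small dense matrices (lists of rows).
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def frobenius(A):
    return math.sqrt(sum(a * a for row in A for a in row))

# Upper triangular, so the eigenvalues are the diagonal entries: rho(A) = 3.
A = [[2.0, 1.0],
     [0.0, 3.0]]

P = [row[:] for row in A]
for _ in range(49):          # after the loop, P = A^50
    P = matmul(P, A)
estimate = frobenius(P) ** (1.0 / 50)
print(estimate)  # close to rho(A) = 3, consistent with the limit above
```

For this matrix the estimate at $k = 50$ is already within a few percent of $\rho(A) = 3$.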


The classic example of a submultiplicative norm is the Frobenius norm, AKA the entrywise $2$-norm, AKA the Schatten $2$-norm:
$$
\|A\|_F = \sqrt{\sum_{i=1}^n \sum_{j=1}^m |a_{ij}|^2}
$$
This is probably the most commonly used of all matrix norms. It is particularly useful because it is the norm derived from the Frobenius inner product (AKA the Hilbert-Schmidt inner product). That is, it turns out that taking the "dot product" of matrices is a useful thing to do, and the Frobenius norm is the norm that results from this inner product.



The Schatten norms (and other unitarily invariant norms) are also submultiplicative, and get a fair bit of use. The entrywise $p$-norms from earlier happen to be submultiplicative only when $1 \leq p \leq 2$; they are easy to compute, but tend not to give tight bounds.
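
To make the Frobenius inner product and its submultiplicativity concrete (again a plain-Python sketch of my own, not from the original answer):

```python
import math

def frob_inner(A, B):
    # Frobenius (Hilbert-Schmidt) inner product: sum of entrywise products.
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def frob_norm(A):
    # The Frobenius norm is the norm induced by that inner product.
    return math.sqrt(frob_inner(A, A))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0, 1.0], [1.0, 1.0]]

# Submultiplicativity: ||AB||_F <= ||A||_F * ||B||_F
print(frob_norm(matmul(A, B)) <= frob_norm(A) * frob_norm(B))  # True
```

One check on one pair of matrices is of course not a proof, but it shows how the "dot product of matrices" view translates into code.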



Finally, we might want our norms to be nicer still.





Operator Norms (AKA "induced/derived norms")




Suppose $\|\cdot\|$ is a vector norm on $\Bbb R^n$. We define the corresponding operator norm on $\Bbb R^{m \times n}$ by
$$
\|A\| = \sup_{\|x\| \leq 1} \|Ax\|
$$




Every operator norm is a submultiplicative norm, but not every submultiplicative norm is an operator norm. Besides doing everything that submultiplicative norms can do, operator norms are useful when you're thinking about how matrices act on vectors. In particular, with operator norms we have the inequality
$$
\|Av\| \leq \|A\| \cdot \|v\|
$$
It follows that for every operator norm, the identity matrix $I$ satisfies $\|I\| = 1$. This fact turns out to have some useful consequences (e.g. inequalities involving the norm of a matrix's inverse).



Most of the norms we have mentioned are not operator norms. The operator norm you see most often is the one derived from the Euclidean norm ($2$-norm) on vectors. In particular, we have
$$
\|A\|_2 = \sup_{\|x\| \leq 1} \|Ax\|_2 = \sigma_1(A)
$$
That is, this norm equals the largest singular value of $A$. It also happens to coincide with the Schatten $\infty$-norm, one of the Schatten norms discussed above.



A particularly useful property of this norm is that $\|A\|_2 = \rho(A)$ whenever $A$ happens to be normal (i.e. whenever $A^T A = A A^T$). Because of this property, $\|\cdot\|_2$ is sometimes called the "spectral norm".



Two other operator norms that are commonly used (especially in numerical linear algebra) are the one derived from the $1$-norm ("taxicab norm") and the one derived from the $\infty$-norm ("max norm"). These are straightforward to compute; in particular, we have
$$
\|A\|_1 = \max_j \sum_{i=1}^m |A_{ij}|, \qquad
\|A\|_{\infty} = \max_i \sum_{j=1}^n |A_{ij}|
$$
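
Those two formulas, the maximum absolute column sum and the maximum absolute row sum, are short enough to implement directly (my own plain-Python sketch, not from the original answer):

```python
def norm_1(A):
    # Induced 1-norm: maximum absolute column sum.
    m, n = len(A), len(A[0])
    return max(sum(abs(A[i][j]) for i in range(m)) for j in range(n))

def norm_inf(A):
    # Induced infinity-norm: maximum absolute row sum.
    return max(sum(abs(a) for a in row) for row in A)

A = [[1, -7],
     [2,  3]]

print(norm_1(A))    # max(|1| + |2|, |-7| + |3|) = 10
print(norm_inf(A))  # max(|1| + |-7|, |2| + |3|) = 8
```

Note that neither requires a singular value decomposition, which is why these two induced norms are so popular for quick bounds.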



















































    Let $A = (a_{i,j}) \in \mathcal M_{n,m}(\Bbb C)$. These are some norms on $\mathcal M_{n,m}(\Bbb C)$, all of which are equivalent:

    • $$\|A\| = \sum_{i,j} |a_{i,j}|$$

    • $$\|A\|^2 = \sum_{i,j} |a_{i,j}|^2$$

    • $$\|A\| = \max_{i} \sum_{j} |a_{i,j}|$$

    • $$\|A\| = \max_{j} \sum_{i} |a_{i,j}|$$

    Of course, we can see an analogy with the norms defined on $\Bbb C^{nm}$.
































      2 Answers
      2






      active

      oldest

      votes








      2 Answers
      2






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      22














      First of all, yes: the matrices form some sort of vector space. You can add any two matrices, and you can multiply matrices by a number, and you'll always get another matrix. In a sense, that's all you need for a set to be a vector space. Matrices have a little bit more structure too: for one, you can multiply two matrices together (which you can't generally do with vectors). Moreover, matrices are really linear maps. I'll get back to those in a minute.



      There are three kinds of matrix norms, each of which is useful in different circumstances.





      Norms ("just" a norm):



      Sometimes a norm is just a norm. Often, it's useful to think of a matrix as "a box of numbers" in the same way that you would think of a vector in $Bbb R^n$ as a "list of numbers". A "matrix norm" by this definition is any function on the matrices that satisfies the usual rules that define a norm. In particular, for any matrices $A,B in Bbb R^{n times m}$ and constant $alpha$, we need to have






      1. $|A| geq 0$, with $|A| = 0 iff A = 0$

      2. $|alpha A| = |alpha||A|$

      3. $|A + B| leq |A| + |B|$




      You would use these norms any time you would use an ordinary norm. One reason we would need this kind of norm is to show that a function involving matrices is "continuous", or "differentiable". The usual example of this kind of norm is the "entrywise $p$-norm", which is given by
      $$
      |A| = left(sum_{i=1}^n sum_{j=1}^m |a_{ij}|^pright)^{1/p}
      $$

      for $1 leq p leq infty$.



      Every matrix norm can be thought of in this way, i.e. as a "general norm". However, sometimes we want our matrix norm to have a bit more structure.





      Submultiplicative Norms (AKA "matrix norms")




      We say that a matrix norm $|cdot|$ is submultiplicative if, in addition to being a norm, it also satisfies the inequality
      $$
      |AB| leq |A| cdot |B|
      $$

      For any square matrices $A,B$ of the same size




      A lot of times, your everyday norm just won't cut it. For those occasions, submultiplicative norms tend to come in handy. These are useful for dealing with "polynomials" on matrices since we have inequalities like
      $$
      |f(A)| = left|sum_{k}a_kA^k right| leq sum_k |a_k||A|^k
      $$

      Notably, if the $a_k$ are non-negative, $|f(A)| leq f(|A|)$, so that we have $|e^A| leq e^{|A|}$ for instance.



      Multiplicative norms are also very useful for spectral (eigenvalue) analysis. In fact, we have some theorems involving $rho(A)$, the spectral radius of $A$, and any submultiplicative norm:




      • $|A| geq rho(A)$

      • $rho(A) = lim_{k to infty} |A^k|^{1/k}$

      • $rho(A) = inf_{|cdot| text{ is submult.}} |A|$


      The classic example of a submultiplicative norm is the Frobenius norm, AKA the entrywise $2$-norm, AKA the Schatten $2$-norm:
      $$
      |A|_F = sqrt{sum_{i = 1}^nsum_{j=1}^m |a_{ij}|^2}
      $$

      This is probably the most commonly used of all matrix norms. It is particularly useful since it is the norm derived from the Frobenius inner product (AKA Hilbert-Schmidt inner product). That is, it turns out that taking the "dot product" of matrices is a useful thing to do, and the Frobenius norm is the norm that results from this dot product.



      The Schatten norms (and other unitarily invariant norms) are also submultiplicative, and get a fair bit of use. The entrywise $p$-norms from earlier only happen to be submultiplicative norms when $1 leq p leq 2$; these are easy to compute, but tend not give tight bounds.



      Finally, we might want our norms to be nicer still.





      Operator Norms (AKA "induced/derived norms")




      Suppose $|cdot |$ is a vector norm on $Bbb R^n$. We define the corresponding operator norm on $Bbb R^{m times n}$ to be given by
      $$
      |A| = sup_{|x| leq 1} |Ax|
      $$




      Every operator norm is a submultiplicative norm. However, not every submultiplicative norm is an operator norm. Besides doing everything that the submultiplicative norms can do, operator norms are useful when you're thinking about how matrices act on vectors. In particular, with operator norms, we have the inequality
      $$
      |Av| leq |A|cdot |v|
      $$

      It follows that for every operator norm, the identity matrix $I$ has the property $|I| = 1$. This fact turns out to have some useful consequences (e.g. inequalities involving the norm of a matrix's inverse).



      Most of the norms that we have mentioned are not operator norms. The operator norm that you see most often is the one derived from the Euclidean norm ($2$-norm) on vectors. In particular, we have
      $$
      |A|_2 = sup_{|x| leq 1} |Ax|_2 = sigma_1(A)
      $$

      That is, this norm is equal to the largest singular value of $A$. This norm also happens to coincide with the "Schatten $infty$-norm", one of the Schatten-norms discussed above.



      A particularly useful property of this norm is that $|A|_2 = rho(A)$ whenever $A$ happens to be normal (i.e. whenever $A^TA = AA^T$). Because of this property, $|cdot|_2$ is sometimes called the "spectral norm".



      Two other operator norms that are commonly used (especially in the context of numerical linear algebra) are the one derived from the $1$-norm ("taxicab norm") and the one derived from the $infty$-norm ("max norm"). These are straightforward to compute; in particular, we have
      $$
      |A|_1= max_j sum_{i=1}^m |A_{ij}|\
      |A|_{infty}= max_i sum_{j=1}^n |A_{ij}|
      $$






      share|cite|improve this answer




























        22














        First of all, yes: the matrices form some sort of vector space. You can add any two matrices, and you can multiply matrices by a number, and you'll always get another matrix. In a sense, that's all you need for a set to be a vector space. Matrices have a little bit more structure too: for one, you can multiply two matrices together (which you can't generally do with vectors). Moreover, matrices are really linear maps. I'll get back to those in a minute.



        There are three kinds of matrix norms, each of which is useful in different circumstances.





        Norms ("just" a norm):



        Sometimes a norm is just a norm. Often, it's useful to think of a matrix as "a box of numbers" in the same way that you would think of a vector in $Bbb R^n$ as a "list of numbers". A "matrix norm" by this definition is any function on the matrices that satisfies the usual rules that define a norm. In particular, for any matrices $A,B in Bbb R^{n times m}$ and constant $alpha$, we need to have






        1. $|A| geq 0$, with $|A| = 0 iff A = 0$

        2. $|alpha A| = |alpha||A|$

        3. $|A + B| leq |A| + |B|$




        You would use these norms any time you would use an ordinary norm. One reason we would need this kind of norm is to show that a function involving matrices is "continuous", or "differentiable". The usual example of this kind of norm is the "entrywise $p$-norm", which is given by
        $$
        |A| = left(sum_{i=1}^n sum_{j=1}^m |a_{ij}|^pright)^{1/p}
        $$

        for $1 leq p leq infty$.



        Every matrix norm can be thought of in this way, i.e. as a "general norm". However, sometimes we want our matrix norm to have a bit more structure.





        Submultiplicative Norms (AKA "matrix norms")




        We say that a matrix norm $|cdot|$ is submultiplicative if, in addition to being a norm, it also satisfies the inequality
        $$
        |AB| leq |A| cdot |B|
        $$

        For any square matrices $A,B$ of the same size




        A lot of times, your everyday norm just won't cut it. For those occasions, submultiplicative norms tend to come in handy. These are useful for dealing with "polynomials" on matrices since we have inequalities like
        $$
        |f(A)| = left|sum_{k}a_kA^k right| leq sum_k |a_k||A|^k
        $$

        Notably, if the $a_k$ are non-negative, $|f(A)| leq f(|A|)$, so that we have $|e^A| leq e^{|A|}$ for instance.



        Multiplicative norms are also very useful for spectral (eigenvalue) analysis. In fact, we have some theorems involving $rho(A)$, the spectral radius of $A$, and any submultiplicative norm:




        • $|A| geq rho(A)$

        • $rho(A) = lim_{k to infty} |A^k|^{1/k}$

        • $rho(A) = inf_{|cdot| text{ is submult.}} |A|$


        The classic example of a submultiplicative norm is the Frobenius norm, AKA the entrywise $2$-norm, AKA the Schatten $2$-norm:
        $$
        |A|_F = sqrt{sum_{i = 1}^nsum_{j=1}^m |a_{ij}|^2}
        $$

        This is probably the most commonly used of all matrix norms. It is particularly useful since it is the norm derived from the Frobenius inner product (AKA Hilbert-Schmidt inner product). That is, it turns out that taking the "dot product" of matrices is a useful thing to do, and the Frobenius norm is the norm that results from this dot product.



        The Schatten norms (and other unitarily invariant norms) are also submultiplicative, and get a fair bit of use. The entrywise $p$-norms from earlier only happen to be submultiplicative norms when $1 leq p leq 2$; these are easy to compute, but tend not give tight bounds.



        Finally, we might want our norms to be nicer still.





        Operator Norms (AKA "induced/derived norms")




        Suppose $|cdot |$ is a vector norm on $Bbb R^n$. We define the corresponding operator norm on $Bbb R^{m times n}$ to be given by
        $$
        |A| = sup_{|x| leq 1} |Ax|
        $$




        Every operator norm is a submultiplicative norm. However, not every submultiplicative norm is an operator norm. Besides doing everything that the submultiplicative norms can do, operator norms are useful when you're thinking about how matrices act on vectors. In particular, with operator norms, we have the inequality
        $$
        |Av| leq |A|cdot |v|
        $$

        It follows that for every operator norm, the identity matrix $I$ has the property $|I| = 1$. This fact turns out to have some useful consequences (e.g. inequalities involving the norm of a matrix's inverse).



        Most of the norms that we have mentioned are not operator norms. The operator norm that you see most often is the one derived from the Euclidean norm ($2$-norm) on vectors. In particular, we have
        $$
        |A|_2 = sup_{|x| leq 1} |Ax|_2 = sigma_1(A)
        $$

        That is, this norm is equal to the largest singular value of $A$. This norm also happens to coincide with the "Schatten $infty$-norm", one of the Schatten-norms discussed above.



        A particularly useful property of this norm is that $|A|_2 = rho(A)$ whenever $A$ happens to be normal (i.e. whenever $A^TA = AA^T$). Because of this property, $|cdot|_2$ is sometimes called the "spectral norm".



        Two other operator norms that are commonly used (especially in the context of numerical linear algebra) are the one derived from the $1$-norm ("taxicab norm") and the one derived from the $infty$-norm ("max norm"). These are straightforward to compute; in particular, we have
        $$
        |A|_1= max_j sum_{i=1}^m |A_{ij}|\
        |A|_{infty}= max_i sum_{j=1}^n |A_{ij}|
        $$






        share|cite|improve this answer


























          22












          22








          22






          First of all, yes: the matrices form some sort of vector space. You can add any two matrices, and you can multiply matrices by a number, and you'll always get another matrix. In a sense, that's all you need for a set to be a vector space. Matrices have a little bit more structure too: for one, you can multiply two matrices together (which you can't generally do with vectors). Moreover, matrices are really linear maps. I'll get back to those in a minute.



          There are three kinds of matrix norms, each of which is useful in different circumstances.





          Norms ("just" a norm):



          Sometimes a norm is just a norm. Often, it's useful to think of a matrix as "a box of numbers" in the same way that you would think of a vector in $Bbb R^n$ as a "list of numbers". A "matrix norm" by this definition is any function on the matrices that satisfies the usual rules that define a norm. In particular, for any matrices $A,B in Bbb R^{n times m}$ and constant $alpha$, we need to have






          1. $|A| geq 0$, with $|A| = 0 iff A = 0$

          2. $|alpha A| = |alpha||A|$

          3. $|A + B| leq |A| + |B|$




          You would use these norms any time you would use an ordinary norm. One reason we would need this kind of norm is to show that a function involving matrices is "continuous", or "differentiable". The usual example of this kind of norm is the "entrywise $p$-norm", which is given by
          $$
          |A| = left(sum_{i=1}^n sum_{j=1}^m |a_{ij}|^pright)^{1/p}
          $$

          for $1 leq p leq infty$.



          Every matrix norm can be thought of in this way, i.e. as a "general norm". However, sometimes we want our matrix norm to have a bit more structure.





          Submultiplicative Norms (AKA "matrix norms")




          We say that a matrix norm $|cdot|$ is submultiplicative if, in addition to being a norm, it also satisfies the inequality
          $$
          |AB| leq |A| cdot |B|
          $$

          For any square matrices $A,B$ of the same size




          A lot of times, your everyday norm just won't cut it. For those occasions, submultiplicative norms tend to come in handy. These are useful for dealing with "polynomials" on matrices since we have inequalities like
          $$
          |f(A)| = left|sum_{k}a_kA^k right| leq sum_k |a_k||A|^k
          $$

          Notably, if the $a_k$ are non-negative, $|f(A)| leq f(|A|)$, so that we have $|e^A| leq e^{|A|}$ for instance.



          Multiplicative norms are also very useful for spectral (eigenvalue) analysis. In fact, we have some theorems involving $rho(A)$, the spectral radius of $A$, and any submultiplicative norm:




          • $|A| geq rho(A)$

          • $rho(A) = lim_{k to infty} |A^k|^{1/k}$

          • $rho(A) = inf_{|cdot| text{ is submult.}} |A|$


          The classic example of a submultiplicative norm is the Frobenius norm, AKA the entrywise $2$-norm, AKA the Schatten $2$-norm:
          $$
          |A|_F = sqrt{sum_{i = 1}^nsum_{j=1}^m |a_{ij}|^2}
          $$

          This is probably the most commonly used of all matrix norms. It is particularly useful since it is the norm derived from the Frobenius inner product (AKA Hilbert-Schmidt inner product). That is, it turns out that taking the "dot product" of matrices is a useful thing to do, and the Frobenius norm is the norm that results from this dot product.



          The Schatten norms (and other unitarily invariant norms) are also submultiplicative, and get a fair bit of use. The entrywise $p$-norms from earlier only happen to be submultiplicative norms when $1 leq p leq 2$; these are easy to compute, but tend not give tight bounds.



          Finally, we might want our norms to be nicer still.





          Operator Norms (AKA "induced/derived norms")




          Suppose $|cdot |$ is a vector norm on $Bbb R^n$. We define the corresponding operator norm on $Bbb R^{m times n}$ to be given by
          $$
          |A| = sup_{|x| leq 1} |Ax|
          $$




          Every operator norm is a submultiplicative norm. However, not every submultiplicative norm is an operator norm. Besides doing everything that the submultiplicative norms can do, operator norms are useful when you're thinking about how matrices act on vectors. In particular, with operator norms, we have the inequality
          $$
          |Av| leq |A|cdot |v|
          $$

          It follows that for every operator norm, the identity matrix $I$ has the property $|I| = 1$. This fact turns out to have some useful consequences (e.g. inequalities involving the norm of a matrix's inverse).



          Most of the norms that we have mentioned are not operator norms. The operator norm that you see most often is the one derived from the Euclidean norm ($2$-norm) on vectors. In particular, we have
          $$
          |A|_2 = sup_{|x| leq 1} |Ax|_2 = sigma_1(A)
          $$

          That is, this norm is equal to the largest singular value of $A$. This norm also happens to coincide with the "Schatten $infty$-norm", one of the Schatten-norms discussed above.



          A particularly useful property of this norm is that $|A|_2 = rho(A)$ whenever $A$ happens to be normal (i.e. whenever $A^TA = AA^T$). Because of this property, $|cdot|_2$ is sometimes called the "spectral norm".



          Two other operator norms that are commonly used (especially in the context of numerical linear algebra) are the one derived from the $1$-norm ("taxicab norm") and the one derived from the $infty$-norm ("max norm"). These are straightforward to compute; in particular, we have
          $$
          |A|_1= max_j sum_{i=1}^m |A_{ij}|\
          |A|_{infty}= max_i sum_{j=1}^n |A_{ij}|
          $$






          share|cite|improve this answer














          First of all, yes: the matrices form some sort of vector space. You can add any two matrices, and you can multiply matrices by a number, and you'll always get another matrix. In a sense, that's all you need for a set to be a vector space. Matrices have a little bit more structure too: for one, you can multiply two matrices together (which you can't generally do with vectors). Moreover, matrices are really linear maps. I'll get back to those in a minute.



          There are three kinds of matrix norms, each of which is useful in different circumstances.





          Norms ("just" a norm):



Sometimes a norm is just a norm. Often, it's useful to think of a matrix as "a box of numbers" in the same way that you would think of a vector in $\Bbb R^n$ as a "list of numbers". A "matrix norm" by this definition is any function on the matrices that satisfies the usual rules that define a norm. In particular, for any matrices $A,B \in \Bbb R^{n \times m}$ and constant $\alpha$, we need to have






1. $\|A\| \geq 0$, with $\|A\| = 0 \iff A = 0$

2. $\|\alpha A\| = |\alpha|\,\|A\|$

3. $\|A + B\| \leq \|A\| + \|B\|$




You would use these norms any time you would use an ordinary norm. One reason we would need this kind of norm is to show that a function involving matrices is "continuous", or "differentiable". The usual example of this kind of norm is the "entrywise $p$-norm", which is given by
$$
\|A\| = \left(\sum_{i=1}^n \sum_{j=1}^m |a_{ij}|^p\right)^{1/p}
$$

for $1 \leq p < \infty$ (with $\|A\| = \max_{i,j} |a_{ij}|$ as the limiting case $p = \infty$).
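As a concrete illustration (my addition, not part of the original answer): the entrywise $p$-norm is just the ordinary vector $p$-norm applied to the flattened matrix. The helper name `entrywise_norm` is made up here for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))

def entrywise_norm(M, p):
    # Treat the matrix as one long list of numbers and take the vector p-norm
    return np.linalg.norm(M.ravel(), p)

# p = 2 recovers the Frobenius norm
assert np.isclose(entrywise_norm(A, 2), np.linalg.norm(A, 'fro'))
# p = inf degenerates to the largest entry in absolute value
assert np.isclose(entrywise_norm(A, np.inf), np.abs(A).max())
```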



          Every matrix norm can be thought of in this way, i.e. as a "general norm". However, sometimes we want our matrix norm to have a bit more structure.





          Submultiplicative Norms (AKA "matrix norms")




We say that a matrix norm $\|\cdot\|$ is submultiplicative if, in addition to being a norm, it also satisfies the inequality
$$
\|AB\| \leq \|A\| \cdot \|B\|
$$

for any square matrices $A,B$ of the same size.




A lot of times, your everyday norm just won't cut it. For those occasions, submultiplicative norms tend to come in handy. These are useful for dealing with "polynomials" of matrices, since we have inequalities like
$$
\|f(A)\| = \left\|\sum_{k} a_k A^k \right\| \leq \sum_k |a_k| \|A\|^k
$$

Notably, if the $a_k$ are non-negative, $\|f(A)\| \leq f(\|A\|)$, so that we have $\|e^A\| \leq e^{\|A\|}$, for instance. (Strictly speaking, the $k = 0$ term contributes $|a_0|\,\|I\|$, so these bounds are cleanest for norms with $\|I\| = 1$, such as the operator norms below.)
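A quick numerical sanity check of these inequalities (my own sketch, added here). The spectral norm is used because, as an operator norm, it is submultiplicative and satisfies $\|I\| = 1$.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

def op(M):
    # Spectral norm (largest singular value): submultiplicative, with ||I|| = 1
    return np.linalg.norm(M, 2)

# Submultiplicativity: ||AB|| <= ||A|| ||B||
assert op(A @ B) <= op(A) * op(B) + 1e-12

# Polynomial bound with f(x) = 1 + x + x^2
fA = np.eye(4) + A + A @ A
assert op(fA) <= 1 + op(A) + op(A) ** 2 + 1e-12
```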



Submultiplicative norms are also very useful for spectral (eigenvalue) analysis. In fact, we have some theorems involving $\rho(A)$, the spectral radius of $A$ (the largest $|\lambda|$ over the eigenvalues $\lambda$ of $A$), and any submultiplicative norm:




• $\|A\| \geq \rho(A)$

• $\rho(A) = \lim_{k \to \infty} \|A^k\|^{1/k}$

• $\rho(A) = \inf_{\|\cdot\| \text{ is submult.}} \|A\|$
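The second bullet (Gelfand's formula) is easy to watch converge numerically. Here is a small sketch, added by me, using the spectral norm on a non-normal matrix where the norm starts well above the spectral radius:

```python
import numpy as np

A = np.array([[0.5, 1.0],
              [0.0, 0.5]])                       # rho(A) = 0.5, but ||A||_2 is about 1.2
rho = np.max(np.abs(np.linalg.eigvals(A)))

# Any submultiplicative norm bounds the spectral radius from above
assert np.linalg.norm(A, 2) >= rho

# Gelfand's formula: ||A^k||^(1/k) -> rho(A) as k grows
gelfand = [np.linalg.norm(np.linalg.matrix_power(A, k), 2) ** (1.0 / k)
           for k in (1, 10, 100)]
assert gelfand[0] > gelfand[1] > gelfand[2]      # decreasing toward rho here
assert abs(gelfand[2] - rho) < 0.05
```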


The classic example of a submultiplicative norm is the Frobenius norm, AKA the entrywise $2$-norm, AKA the Schatten $2$-norm:
$$
\|A\|_F = \sqrt{\sum_{i = 1}^n \sum_{j=1}^m |a_{ij}|^2}
$$

          This is probably the most commonly used of all matrix norms. It is particularly useful since it is the norm derived from the Frobenius inner product (AKA Hilbert-Schmidt inner product). That is, it turns out that taking the "dot product" of matrices is a useful thing to do, and the Frobenius norm is the norm that results from this dot product.
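For illustration (my addition): the Frobenius inner product is $\langle A, B\rangle = \operatorname{trace}(A^T B)$, which is exactly the entrywise "dot product", and the Frobenius norm is the norm it induces.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))

inner = np.trace(A.T @ B)                  # <A, B> = trace(A^T B)
assert np.isclose(inner, (A * B).sum())    # ... which is sum_ij a_ij * b_ij

# The Frobenius norm is the norm derived from this inner product
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.trace(A.T @ A)))
```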



The Schatten norms (and other unitarily invariant norms) are also submultiplicative, and get a fair bit of use. The entrywise $p$-norms from earlier only happen to be submultiplicative when $1 \leq p \leq 2$; these are easy to compute, but tend not to give tight bounds.



          Finally, we might want our norms to be nicer still.





          Operator Norms (AKA "induced/derived norms")




Suppose $\|\cdot\|$ is a vector norm on $\Bbb R^n$. We define the corresponding operator norm on $\Bbb R^{m \times n}$ to be given by
$$
\|A\| = \sup_{\|x\| \leq 1} \|Ax\|
$$




Every operator norm is a submultiplicative norm. However, not every submultiplicative norm is an operator norm. Besides doing everything that the submultiplicative norms can do, operator norms are useful when you're thinking about how matrices act on vectors. In particular, with operator norms, we have the inequality
$$
\|Av\| \leq \|A\| \cdot \|v\|
$$

It follows that for every operator norm, the identity matrix $I$ has the property $\|I\| = 1$. This fact turns out to have some useful consequences (e.g. inequalities involving the norm of a matrix's inverse).
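A small check of both facts (a sketch added here, relying on `np.linalg.norm` computing the induced $1$-, $2$-, and $\infty$-norms when given a matrix argument):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
v = rng.standard_normal(4)

for p in (1, 2, np.inf):
    # Defining property of the induced norm: ||Av|| <= ||A|| ||v||
    assert np.linalg.norm(A @ v, p) <= np.linalg.norm(A, p) * np.linalg.norm(v, p) + 1e-12
    # And the identity matrix always has operator norm exactly 1
    assert np.isclose(np.linalg.norm(np.eye(4), p), 1.0)
```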



Most of the norms that we have mentioned are not operator norms. The operator norm that you see most often is the one derived from the Euclidean norm ($2$-norm) on vectors. In particular, we have
$$
\|A\|_2 = \sup_{\|x\| \leq 1} \|Ax\|_2 = \sigma_1(A)
$$

That is, this norm is equal to the largest singular value of $A$. This norm also happens to coincide with the "Schatten $\infty$-norm", one of the Schatten norms discussed above.
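Numerically (my addition), the identity $\|A\|_2 = \sigma_1(A)$ can be checked against the SVD:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 5))

sigma = np.linalg.svd(A, compute_uv=False)        # singular values, largest first
assert np.isclose(np.linalg.norm(A, 2), sigma[0])  # ||A||_2 = sigma_1(A)
```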



A particularly useful property of this norm is that $\|A\|_2 = \rho(A)$ whenever $A$ happens to be normal (i.e. whenever $A^T A = A A^T$). Because of this property, $\|\cdot\|_2$ is sometimes called the "spectral norm".
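To illustrate both directions (a sketch I've added): a symmetric matrix is normal, so its spectral norm equals its spectral radius, while a nilpotent (non-normal) matrix shows the equality can fail badly otherwise.

```python
import numpy as np

rng = np.random.default_rng(5)
B = rng.standard_normal((4, 4))
S = B + B.T                                  # symmetric, hence normal

rho = np.max(np.abs(np.linalg.eigvalsh(S)))  # spectral radius of S
assert np.isclose(np.linalg.norm(S, 2), rho)

# Normality matters: this nilpotent matrix has rho = 0 but spectral norm 1
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
assert np.isclose(np.linalg.norm(N, 2), 1.0)
```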



Two other operator norms that are commonly used (especially in the context of numerical linear algebra) are the one derived from the $1$-norm ("taxicab norm") and the one derived from the $\infty$-norm ("max norm"). These are straightforward to compute; in particular, we have
$$
\|A\|_1 = \max_j \sum_{i=1}^m |A_{ij}| \\
\|A\|_{\infty} = \max_i \sum_{j=1}^n |A_{ij}|
$$
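The "straightforward to compute" claim is easy to verify (my addition): the induced $1$-norm is the maximum absolute column sum, and the induced $\infty$-norm is the maximum absolute row sum.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 4))

# ||A||_1: maximum absolute column sum; ||A||_inf: maximum absolute row sum
assert np.isclose(np.linalg.norm(A, 1), np.abs(A).sum(axis=0).max())
assert np.isclose(np.linalg.norm(A, np.inf), np.abs(A).sum(axis=1).max())
```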















          edited Nov 20 '18 at 23:13

























          answered Aug 12 '15 at 14:37









          Omnomnomnom






































Let $A=(a_{i,j})\in\mathcal M_{n,m}(\Bbb C)$. These are some norms on $\mathcal M_{n,m}(\Bbb C)$, all of which are equivalent:

• $$\|A\|=\sum_{i,j}|a_{i,j}|$$

• $$\|A\|^2=\sum_{i,j}|a_{i,j}|^2$$

• $$\|A\|=\max_{i}\sum_{j}|a_{i,j}|$$

• $$\|A\|=\max_{j}\sum_{i}|a_{i,j}|$$
  and of course we can see an analogy with the norms defined on $\Bbb C^{nm}$.
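As a quick numerical companion (added here, not part of the original answer), the four norms above can be computed directly, and on a finite-dimensional space they bound each other up to constants:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

sum_abs     = np.abs(A).sum()              # sum_{i,j} |a_ij|     = 10
frobenius   = np.sqrt((A ** 2).sum())      # sqrt(sum |a_ij|^2)   = sqrt(30)
max_row_sum = np.abs(A).sum(axis=1).max()  # max_i sum_j |a_ij|   = 7
max_col_sum = np.abs(A).sum(axis=0).max()  # max_j sum_i |a_ij|   = 6

# Equivalence in action: each of the others is bounded by the entrywise 1-norm
assert max(frobenius, max_row_sum, max_col_sum) <= sum_abs
```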
















                  answered Aug 12 '15 at 7:32







                  user260717




































