Markov matrices with no ones on the off-diagonals don't have $-1$ as an eigenvalue












Every time I have seen a Markov matrix with an eigenvalue of $-1$, it had some ones on its off-diagonals. The most obvious example is a simple permutation matrix:

$$M = \left(\begin{array}{cc}0&1\\1&0\end{array}\right)$$

with eigenvalues $1$ and $-1$. As soon as you subtract $\epsilon$ from the ones and add it to the zeros, the $-1$ eigenvalue decreases in magnitude.
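This is easy to see numerically (a quick NumPy check; the specific $\epsilon = 0.1$ is just an illustrative choice):

```python
import numpy as np

# The 2x2 swap (permutation) matrix: its eigenvalues are exactly 1 and -1.
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(np.sort(np.linalg.eigvals(M).real))   # [-1.  1.]

# Move eps of probability mass from the ones to the zeros.
# The eigenvalues of [[eps, 1-eps], [1-eps, eps]] are 1 and 2*eps - 1,
# so the negative eigenvalue shrinks in magnitude as eps grows.
eps = 0.1
M_eps = np.array([[eps,       1.0 - eps],
                  [1.0 - eps, eps      ]])
vals = np.sort(np.linalg.eigvals(M_eps).real)
print(vals)   # [-0.8  1. ]
```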



For another example, see the matrix $M$ in this answer. Again, an eigenvalue of $-1$ with multiple ones on the off-diagonals.



Is there a way to prove or refute this claim?



All I have so far are examples supporting it; I haven't been able to come up with a direction for a proof.





Why do I care about this result? Because I'm considering Markov matrices formed from coin flips where the coins can never be one-sided. Such matrices might have ones on their diagonals, but never on the off-diagonals.



If this is true, then for such a matrix $M$ we can write

$$vM^n = c_1 + c_2\lambda_1^n + \dots$$

Since $\lambda_1$ and the other eigenvalues besides $1$ must have magnitude $<1$, we can say that as $n \to \infty$,

$$vM^n \to c_1.$$
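As a sanity check of this limit, here is a small NumPy experiment with an arbitrary $2\times 2$ stochastic matrix that has no ones off the diagonal (the particular entries are just an example):

```python
import numpy as np

# A row-stochastic matrix with no ones off the diagonal (entries chosen
# arbitrarily for illustration). Its eigenvalues are 1 and 0.2, so
# v M^n should converge to the stationary distribution.
M = np.array([[0.5, 0.5],
              [0.3, 0.7]])
v = np.array([1.0, 0.0])          # any starting distribution
vn = v @ np.linalg.matrix_power(M, 50)
print(vn)   # [0.375 0.625], the stationary distribution of M
```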



























      linear-algebra matrices stochastic-processes eigenvalues-eigenvectors markov-chains






asked Jan 18 at 8:05 – Rohit Pandey






















2 Answers



















Yes, there's a way to refute the claim: a counterexample.

The matrix $\begin{bmatrix}0&0&\frac12&\frac12\\ 0&0&\frac12&\frac12\\ \frac12&\frac12&0&0\\ \frac12&\frac12&0&0\end{bmatrix}$ has $-1$ as an eigenvalue with eigenvector $\begin{bmatrix}1\\1\\-1\\-1\end{bmatrix}$.

Eigenvalues of $-1$ are associated with bipartite graphs: if every step alternates between two subgraphs, there will be an eigenvector for $-1$ weighting everything in one of those subgraphs positive and everything in the other negative. Other roots of unity are associated with directed cycles of longer length; they're less likely to come up unless you specifically rig the system to get them, but they're possible.
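The counterexample is easy to verify numerically (a NumPy sketch; the eigenvector check is the one stated above):

```python
import numpy as np

# The counterexample: row-stochastic, every entry 0 or 1/2 (no ones
# anywhere), yet -1 is an eigenvalue. The chain is periodic: each step
# alternates between the state blocks {1,2} and {3,4}.
M = np.array([[0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.5, 0.5],
              [0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0]])
vals = np.sort(np.linalg.eigvals(M).real)
print(vals)   # [-1.  0.  0.  1.]

# Verify the stated eigenvector for -1 directly: M x = -x.
x = np.array([1.0, 1.0, -1.0, -1.0])
print(M @ x)   # [-1. -1.  1.  1.]
```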






answered Jan 18 at 9:04, edited Jan 18 at 13:54 – jmerry













• I thought I had... I shouldn't post so much in the wee hours, should I? – jmerry, Jan 18 at 13:55










• Thanks, is there a proof that alternating between subgraphs causes the matrix to have $-1$ as an eigenvalue? – Rohit Pandey, Jan 18 at 16:44










• Actually, that's easy to see. But how do we prove that alternating is the only way to get a $-1$? – Rohit Pandey, Jan 18 at 16:52








• Consider the eigenvector for $-1$. Split it into positive entries and negative entries, then look at the sums for each part. Any edges within the same part work against us, so it's not an eigenvector. – jmerry, Jan 18 at 20:18




















          Too long for a comment.




"The only thing that can derail my argument in the blog is if the matrices have an eigenvalue of $-1$."




Then the eigenvalue $-1$ of the matrix $M$ does not cause big problems for the investigation, because to it corresponds the eigenvalue $1$ of the matrix $M^2$, and we can consider the convergence of odd and even powers separately. This is the direction in which I'm going to continue my answer to the linked question.
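With the bipartite counterexample from the other answer, for instance, $M^2$ has eigenvalue $1$ with multiplicity $2$, and the even and odd subsequences of $vM^n$ each converge (a NumPy sketch using that matrix):

```python
import numpy as np

# Bipartite example where M has eigenvalue -1: M^2 then has eigenvalue 1
# (with multiplicity 2), and odd/even powers of M converge separately.
M = np.array([[0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.5, 0.5],
              [0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0]])
M2 = M @ M
print(np.sort(np.linalg.eigvals(M2).real))   # [0. 0. 1. 1.]

v = np.array([1.0, 0.0, 0.0, 0.0])
even = v @ np.linalg.matrix_power(M, 50)   # limit along even powers
odd  = v @ np.linalg.matrix_power(M, 51)   # limit along odd powers
print(even)   # [0.5 0.5 0.  0. ]
print(odd)    # [0.  0.  0.5 0.5]
```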



The same approach is applicable for eigenvalues $\xi$ which are roots of unity of small order $m$. Moreover, if all entries of the $n\times n$ matrix $M$ are rational, then the degree $[\Bbb Q(\xi):\Bbb Q]$ of the extension $\Bbb Q(\xi)$ over $\Bbb Q$ is $\varphi(m)$, where $\varphi$ is the Euler function (see, for instance, [L, Ch. VIII, §3, Theorem 6]); since $\xi$ is a root of the degree-$n$ characteristic polynomial of $M$, this allows us to bound $m$ in terms of $n$ as follows.



Let $m=p_1^{m_1}\dots p_k^{m_k}$ be a product of powers of distinct primes. Then, according to [M],



$$\varphi(m)=m\left(1-\frac 1{p_1}\right)\dots\left(1-\frac 1{p_k}\right)\ge \frac{cm}{\ln\ln m}.$$



I guess that $c$ here is an absolute constant. An asymptotically weaker but more concrete lower bound for $\varphi(m)$ follows from the observations that $k\le\log_2 m$ and $1-\frac 1{p_i}\ge 1-\frac 1{i+1}$ for each $i$, so



$$\varphi(m)\ge m\cdot\frac 12\cdot\frac 23\cdots\frac k{k+1}=\frac m{k+1}\ge \frac{m}{\log_2 m +1}.$$
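The concrete bound can be spot-checked numerically; the sketch below uses a naive trial-division totient (my own helper, written for clarity rather than speed):

```python
from math import log2

def phi(m):
    # Naive Euler totient via trial-division factorization:
    # phi(m) = m * prod(1 - 1/p) over the distinct prime factors p of m.
    result, n, p = m, m, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

assert phi(12) == 4 and phi(10) == 4          # sanity checks
# The concrete bound: phi(m) >= m / (log2(m) + 1).
assert all(phi(m) >= m / (log2(m) + 1) for m in range(2, 10000))
print("bound verified for 2 <= m < 10000")
```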



          References



          [L] Serge Lang, Algebra, Addison-Wesley Publ., 1965 (Russian translation, M.:Mir, 1968).



[M] Mathematical encyclopedia, vol. 5, 1985 (in Russian).

answered Jan 19 at 9:40 – Alex Ravsky












