Calculating the standard error for a one-sample t-test: σ/sqrt(n) or s/sqrt(n)?











Consider a case where you want to test whether a small sample deviates significantly from the (normally distributed) population from which it is drawn. Both the population standard deviation (σ) and mean (μ) are known, as are the sample standard deviation (s) and mean (Xbar).



Because the sample is small, you will need to use a t-test. However, my question is: do we calculate the standard error as s/sqrt(n) or σ/sqrt(n)? With a z-test we usually use σ if we know it, but I'm wondering whether the same applies to a t-test.










      statistics statistical-inference hypothesis-testing






asked Oct 18 '15 at 22:41 by cybervision




          1 Answer

















Strictly speaking, if both $\mu$ and $\sigma$ are known, you have no reason to do a test.

I think you must mean that you want to test the null hypothesis $H_0: \mu = \mu_0$ against the alternative $H_a: \mu \ne \mu_0$, where $\mu_0$ is a specified number.



z-test. If the numerical value of $\sigma$ is known, there is no need to estimate it using the sample standard deviation $S$. In that case you would have a z-test, with test statistic
$$Z = \frac{\bar X - \mu_0}{\sigma/\sqrt{n}},$$
where $Z$ has a standard normal distribution if $H_0$ is true. Then you would reject $H_0$ at the 5% level of significance if $|Z| > 1.96.$
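
To make the formula concrete, here is a minimal sketch (assuming Python with NumPy/SciPy and entirely made-up sample data) of the z-statistic, which uses the known $\sigma$ in the standard error:

```python
import numpy as np
from scipy import stats

# Hypothetical data: a small sample from a population with known sigma.
x = np.array([5.1, 4.8, 5.6, 5.0, 4.7, 5.3])   # made-up observations
mu0 = 5.0        # hypothesized population mean under H0
sigma = 0.4      # known population standard deviation

n = len(x)
z = (x.mean() - mu0) / (sigma / np.sqrt(n))    # standard error uses sigma
p_value = 2 * stats.norm.sf(abs(z))            # two-sided p-value

print(f"Z = {z:.3f}, p = {p_value:.3f}")       # reject H0 at the 5% level if |Z| > 1.96
```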



t-test. If the numerical value of $\sigma$ is not known, then you would use a t-test, with test statistic
$$T = \frac{\bar X - \mu_0}{S/\sqrt{n}},$$
where $T$ has Student's t distribution with $n - 1$ degrees of freedom if $H_0$ is true. Then you would reject $H_0$ at the 5% level of significance if $|T| > t^*,$ where $t^*$ (obtained from tables) cuts 2.5% of the area from the upper tail of Student's t distribution with $n - 1$ degrees of freedom.
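
The corresponding sketch for the t-statistic replaces $\sigma$ with the sample standard deviation $S$ (same made-up data as above; SciPy's built-in one-sample t-test should give the same result):

```python
import numpy as np
from scipy import stats

x = np.array([5.1, 4.8, 5.6, 5.0, 4.7, 5.3])   # same made-up observations
mu0 = 5.0                                      # hypothesized mean under H0
n = len(x)

s = x.std(ddof=1)                              # sample standard deviation S (n-1 denominator)
t = (x.mean() - mu0) / (s / np.sqrt(n))        # standard error uses S, not sigma
p_value = 2 * stats.t.sf(abs(t), df=n - 1)     # two-sided p-value with n-1 degrees of freedom

# SciPy's one-sample t-test should agree with the manual calculation.
t_check, p_check = stats.ttest_1samp(x, popmean=mu0)
print(f"T = {t:.3f}, p = {p_value:.3f}  (scipy: T = {t_check:.3f}, p = {p_check:.3f})")
```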



Distinction between z-test and t-test. For $n > 30,$ you will find that the tabled value $t^*$ is just a bit larger than 1.96. This leads some authors to say you should use a t-test only if $n$ is small.



However, if you use software, you will find that whenever you do a z-test, you will be asked for the numerical value of $\sigma.$ Also, the "rule of 30" really only works for testing at the 5% level of significance. [At the 1% level, it would be the (seldom mentioned) "rule of 120." And if you're looking at P-values, no such rule suffices.]
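
The "rule of 30" versus the "rule of 120" is easy to check numerically; a quick sketch (again assuming SciPy) compares the two-sided critical values $t^*$ with the corresponding normal critical values:

```python
from scipy import stats

# Two-sided critical values t* for alpha = 0.05 and alpha = 0.01 at several df,
# compared with the normal critical values 1.960 and 2.576.
for df in (10, 30, 120):
    t_05 = stats.t.ppf(0.975, df)   # 5% level: compare with 1.960
    t_01 = stats.t.ppf(0.995, df)   # 1% level: compare with 2.576
    print(f"df = {df:4d}:  t*(5%) = {t_05:.3f},  t*(1%) = {t_01:.3f}")

# Around df = 30 the 5% critical value (about 2.04) is already close to 1.96,
# but the 1% critical value only approaches 2.576 near df = 120.
```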



The best rule for z-test vs. t-test is very simple:

If the numerical value of $\sigma$ is known, then use a z-test. If $\sigma$ is not known, then it is estimated by $S$ and you will use a t-test. The distinction has to do purely with whether $\sigma$ is known; it really has nothing to do with sample size.







answered Oct 19 '15 at 0:13 by BruceET (edited Oct 19 '15 at 0:27)



