Rigorous Meaning of “Drawing a Sample” $\omega$ from a Probability Space $(\Omega, \mathcal{A}, \mathbb{P})$












Let $(\Omega, \mathcal{A}, \mathbb{P})$ be a probability space. What does it mean (in the most formal and rigorous sense possible) to "draw a sample" $\omega \in \Omega$ from this space? Intuitively, I think I understand what is happening, but I am looking for a precise mathematical way of describing the process of sampling.



Kind regards and thank you very much!



Joker










Tags: probability, probability-theory, sampling, sampling-theory
















asked Jan 28 at 18:56 by Joker123






















2 Answers


















I'm going to jump right in here and give a non-answer, since none of the experts seem to have anything to say. I asked almost exactly the same question here (What is a sample of a random variable?), and the answers I got were quite useful.

One short version of the answer is "What are you going to use your sample for?"

Suppose you say "Well, I've got a random variable $X$ defined on $\Omega$, and I'd like to know whether, on average, $X$ for my sample will be larger than $17$."



In that case, I'd say "Then you should compute $\Bbb P\{X > 17\}$; you don't need to mention samples at all."
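As a concrete illustration of that point (an editorial sketch added for concreteness, not part of the original answer; the distribution of $X$ is made up): if, say, $X \sim \mathcal N(15, 4^2)$, then $\Bbb P\{X > 17\}$ is determined by the law of $X$ alone, and "drawing samples" only ever produces a noisy estimate of that same number.

```python
# Illustrative sketch only: X ~ Normal(mu=15, sigma=4) is a hypothetical choice.
import math
import random

mu, sigma = 15.0, 4.0

# Exact value from the distribution of X: P(X > 17) = 1 - Phi((17 - mu) / sigma).
p_exact = 0.5 * math.erfc((17.0 - mu) / (sigma * math.sqrt(2.0)))

# "Drawing samples": simulate many realisations X(omega) and count exceedances.
n = 100_000
p_estimate = sum(random.gauss(mu, sigma) > 17.0 for _ in range(n)) / n

print(f"P(X > 17), exact        : {p_exact:.4f}")     # about 0.3085
print(f"P(X > 17), via sampling : {p_estimate:.4f}")  # close to the exact value
```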



In fact, it doesn't take long to get good at removing the word "sample" from most questions just like that; it's a little like learning not to talk about the ether when you're discussing physics. :)






answered Jan 28 at 23:47 by John Hughes

There's probably no precise answer to this question, since "taking a sample", "observing the occurrence of a random variable" and "performing a random experiment" are just expressions we use in reference to real-life actions that we interpret as taking note of the result of an experiment (in a broad sense, i.e. a procedure) whose result we can't completely predict beforehand. For this to make sense we first need to agree on which aspect of the final result we are interested in; we could also discuss the cause of the uncertainty. For instance, when throwing a coin in the air we could:

• check whether it eventually comes down within a reasonable amount of time (maybe a very long one, like an hour);

• check whether it touches the ground within a very precisely specified amount of time (maybe a very small interval rather than an exact number), which we could calculate from the laws of physics, given that we know the initial height and velocity;

• check whether it is heads or tails that faces upwards when it touches the ground, perhaps given that we know every detail about the velocity, height, initial position, the point where the force is applied, the actual force, etc.;

• do the same, but without having all that information.


Which of these experiments are random and which are not? Well, perhaps there's no right answer. Of course, I tried to describe them in order from the most to the least predictable outcome (if we can think of randomness as a matter of degree), but maybe we could agree that the first one is not random, while the last one is; the second might be, and the third one is difficult to actually perform, given the chaotic dynamics involved. In fact, Bohr and Einstein had a famous controversy regarding this subject: basically, Einstein would say that the third one would not be random if we were good enough with our theories and predictions, while Bohr would say that all of them (maybe even the first one) are random.



All this is just to explain that there's no clear notion, much less a definition, of what "random" means. In a sense, it is just a consequence of our lack of knowledge or imprecision, although it could also be a fundamental characteristic of the universe and the way it works.

So when we refer to a random experiment and a random event we can look at it in two ways:




• before the experiment is performed, the event $A$ is an element of the $\sigma$-algebra $\mathcal A$, and so the value $P(A)$ is defined, where $P\colon \mathcal A \longrightarrow [0,1]$ is a function satisfying the usual properties of a probability;

• after the experiment is performed we get as a result a specific element of $\Omega$, say $\omega_0$, and we say that $A$ "occurred", "happened", etc., if $\omega_0 \in A$, and that it didn't otherwise.

This all comes down to the interpretation of probability theory, and usually there's not much formalization surrounding it. But what we can formalize is that performing a random experiment is selecting an element $\omega_0$ of the sample space $\Omega$ (or perhaps letting the universe do it, in a way that "respects" the probability law, whatever that means).



In the same sense, observing the value of a random variable $X$, or registering the value of an occurrence of $X$, can be represented as the value $X(\omega_0)$, where $\omega_0$ is the result of the random experiment.
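For a finite $\Omega$ this informal description can at least be mimicked in code. The following sketch (my own illustration, using a made-up loaded-die space, not something from the original answer) stores $P$ as a table on $\Omega$, lets a pseudo-random generator play the role of "the universe" selecting $\omega_0$, and then records whether an event $A$ occurred and what value $X(\omega_0)$ was observed.

```python
# Minimal sketch for a finite probability space (hypothetical loaded die).
import random

Omega = [1, 2, 3, 4, 5, 6]                              # sample space
P = {1: 0.1, 2: 0.1, 3: 0.1, 4: 0.1, 5: 0.1, 6: 0.5}    # probability of each outcome

A = {2, 4, 6}                   # an event: "the result is even" (an element of 2^Omega)

def X(omega):
    """A random variable is just a (measurable) function X: Omega -> R."""
    return omega ** 2

# Before the experiment: P(A) is already defined.
print("P(A) =", sum(P[w] for w in A))

# "Performing the random experiment": something selects omega_0 in a way that
# respects P; here a pseudo-random generator samples proportionally to the weights.
omega_0 = random.choices(Omega, weights=[P[w] for w in Omega], k=1)[0]

print("omega_0 =", omega_0)
print("A occurred:", omega_0 in A)                 # A 'occurred' iff omega_0 in A
print("observed value X(omega_0) =", X(omega_0))   # observing the random variable
```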



            And since a "random sample of size $n$" is just a collection of $n$ independent and identically distributed random variables, say
            $$X_1,X_2,ldots,X_n$$
            (also representable as a random vector), "drawing a sample" is performing the experiment (*) underlying the definition of the space $(Omega,mathcal A,P)$ in such a way that the independence and identical distribution assumptions are valid, to obtain a result $omega_0$. The observed sample is not a collection of random variables, but the $n$-tuple
            $$big(X_1(omega_0),X_2(omega_0),ldots,X_n(omega_0)big) in mathbb R^n.$$





(*) Here, we must consider that the random experiment is, for instance, selecting $n$ persons who will answer a question, or any other action that yields $n$ results, i.e. elements of $\Omega$.
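For a tiny concrete instance (my own example, consistent with the notation above but not taken from the original answer): take three coin tosses, so
$$\Omega=\{H,T\}^3,\qquad X_i(\omega)=\mathbf 1\{\omega \text{ shows } H \text{ on toss } i\},\quad i=1,2,3.$$
If the experiment yields $\omega_0=(H,T,H)$, the observed sample is $\big(X_1(\omega_0),X_2(\omega_0),X_3(\omega_0)\big)=(1,0,1)$, a point of $\mathbb R^3$, while $X_1,X_2,X_3$ themselves remain random variables.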



It is true that we could also think that there's only one variable $X$ defined on the space and that we perform the experiment $n$ times in order to get the $n$ results
$$\omega_1,\omega_2,\ldots,\omega_n,$$
and so drawing a sample gives us the $n$-tuple
$$X(\omega_1),X(\omega_2),\ldots,X(\omega_n).$$
But even in this case, we could define a new space by taking
$$\tilde\Omega=\Omega^n=\Omega \times \Omega \times \ldots \times \Omega \quad\text{($n$ times)},$$
with $\tilde{\mathcal A}$ the product $\sigma$-algebra and $\tilde P$ defined according to the usual probability properties. In this way, performing the $n$ successive experiments represented by $(\Omega,\mathcal A,P)$ would be equivalent to performing just once the experiment represented by $(\tilde\Omega,\tilde{\mathcal A},\tilde P)$.
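A short sketch of this equivalence (again an illustration of mine, with a hypothetical fair-coin space): drawing the $n$ coordinates independently under $P$ is the same as drawing one point of $\tilde\Omega=\Omega^n$ under the product measure, and the observed sample is the tuple $\big(X(\omega_1),\ldots,X(\omega_n)\big)$.

```python
# Sketch of the product-space viewpoint (hypothetical fair coin, n = 5 tosses).
import random

Omega = ["H", "T"]
P = {"H": 0.5, "T": 0.5}        # law of a single toss (assumed for illustration)
n = 5

def X(omega):
    """One random variable on Omega: 1 for heads, 0 for tails."""
    return 1 if omega == "H" else 0

# One draw omega_0 = (omega_1, ..., omega_n) from Omega^n under the product
# measure: since the coordinates are independent, sampling each coordinate
# separately under P is the same as sampling the whole n-tuple once.
omega_0 = tuple(random.choices(Omega, weights=[P[w] for w in Omega], k=n))

observed_sample = tuple(X(w) for w in omega_0)   # (X(omega_1), ..., X(omega_n))
print("omega_0         =", omega_0)
print("observed sample =", observed_sample)
```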






answered Jan 29 at 1:34 (edited Jan 29 at 19:27) by Alejandro Nasif Salum













