$O_P(\cdot)$ and linearization (Taylor) under an expectation
I have two independent random variables, $X, Y$: respectively a standard $n$-dimensional Gaussian and uniform on $\{-1,1\}^n$. I let $X' := X/\sqrt{n}$, so that $X' = O_P(1)$.



(here $O_P(\cdot)$ refers to stochastic boundedness, "big O in probability")
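For completeness, this $O_P(1)$ claim follows from a second-moment bound:
$$
\mathbb{E}\|X'\|^2 = \frac{1}{n}\sum_{i=1}^n \mathbb{E}[X_i^2] = 1,
\qquad\text{so}\qquad
P(\|X'\| > t) \le \frac{1}{t^2} \quad\text{for all } t > 0
$$
by Markov's inequality applied to $\|X'\|^2$.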



I am confronted with an expression of the form
$$
\mathbb{E}\Bigl[ \Phi(X)\,\mathbb{E}\bigl[ \Psi(\alpha\cdot\langle X',Y\rangle)\mid X\bigr]\Bigr]
$$

where $\alpha = o(1)$, and therefore $\alpha\cdot\langle X',Y\rangle = o_P(1)$ (all the stochastic $O_P, o_P$ are with respect to $X$). What I would like to do is a Taylor expansion of $\Psi$ to second order inside the inner expectation.
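(The $o_P(1)$ claim itself follows from $\mathbb{E}[(\alpha\cdot\langle X',Y\rangle)^2] = \alpha^2 \to 0$ and Chebyshev's inequality.) Concretely, the expansion I have in mind is the following sketch, assuming $\Psi$ is twice continuously differentiable near $0$. Since the coordinates of $Y$ are i.i.d. Rademacher and independent of $X$,
$$
\mathbb{E}[\langle X',Y\rangle \mid X] = 0,
\qquad
\mathbb{E}[\langle X',Y\rangle^2 \mid X] = \sum_{i=1}^n (X'_i)^2 = \|X'\|^2,
$$
so that, writing $R$ for the second-order Taylor remainder,
$$
\mathbb{E}\bigl[\Psi(\alpha\cdot\langle X',Y\rangle)\mid X\bigr]
= \Psi(0) + \frac{\alpha^2}{2}\,\Psi''(0)\,\|X'\|^2
+ \mathbb{E}\bigl[R(\alpha\cdot\langle X',Y\rangle)\mid X\bigr];
$$
the first-order term drops since its conditional mean vanishes, and the issue is whether the expectation of the remainder is negligible.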



My question: is that "legit"? Are there more assumptions I need to check?
Tags: probability, asymptotics
asked Jan 9 at 12:24 by Clement C. (edited Jan 10 at 4:42)
1 Answer
If $\Psi$ is second-order differentiable, then it makes no difference. However, if $\Psi$ is a random function, then you may have more issues. For example, if $\Psi(X) = 1$ for almost all realizations of $X$, then there is no harm in doing a Taylor expansion.



However, knowing that $\alpha \cdot \langle X', Y \rangle = o_P(1)$ may not be enough to calculate the expectation. I assume you want to show something like the expectation tending to zero, but probability bounds alone are not quite enough: while your expression is $o_P(1)$, you have no control on the event not covered by the probability bound. You may need to appeal to Slutsky's theorem (or something similar).



To elaborate, suppose $|Z| = o_P(1)$. Then a simple calculation shows
\begin{align}
E|Z| &= \int_{|Z| < \varepsilon}|Z| \,dP + \int_{|Z| \geq \varepsilon} |Z|\,dP\\
&\leq \varepsilon \, P(|Z| < \varepsilon) + \int_{|Z| \geq \varepsilon}|Z|\,dP.
\end{align}

Even though the probability in the first term tends to one, so that the first term is eventually at most $\varepsilon$, knowing $|Z| = o_P(1)$ gives no control on the second term, and hence is not enough to show that the expectation tends to zero (although it still may under some regularity conditions, such as uniform integrability).
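For a concrete instance of this failure: take $U$ uniform on $[0,1]$ and set $Z_n := n\,\mathbf{1}\{U \le 1/n\}$. Then for any fixed $\varepsilon \in (0,1)$,
$$
P(|Z_n| \ge \varepsilon) = P(U \le 1/n) = \frac{1}{n} \longrightarrow 0,
\qquad\text{yet}\qquad
E[Z_n] = n\cdot\frac{1}{n} = 1 \text{ for every } n,
$$
so $Z_n = o_P(1)$ while $E[Z_n] \not\to 0$.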






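A quick Monte Carlo sanity check of that example (a sketch in Python with numpy; the names and constants are mine, purely illustrative):

    import numpy as np

    # Z_n = n * 1{U <= 1/n} with U ~ Uniform[0,1] satisfies Z_n -> 0
    # in probability, yet E[Z_n] = 1 for every n.
    rng = np.random.default_rng(0)

    for n in (10, 100, 1_000, 10_000):
        u = rng.uniform(size=1_000_000)   # Monte Carlo draws of U
        z = n * (u <= 1.0 / n)            # realizations of Z_n
        p_big = (z >= 0.5).mean()         # estimates P(|Z_n| >= 1/2) ~ 1/n
        mean = z.mean()                   # estimates E[Z_n] ~ 1
        print(f"n={n:>6}  P(|Z_n| >= 1/2) ~ {p_big:.5f}  E[Z_n] ~ {mean:.3f}")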
answered Jan 15 at 17:39 by OldGodzilla
• What do you mean with your first sentence, exactly? I don't get it. – Clement C., Jan 16 at 17:09

• If $\Psi: \mathbb{R} \to \mathbb{R}$ is twice continuously differentiable, then no matter what values of $X$ are realized, $\Psi$ will still be differentiable. You can always do a Taylor expansion; you just need to check that $X$ falls into the open interval in which you are doing such an expansion with high probability. – OldGodzilla, Jan 16 at 17:17

• But my question (in case it wasn't clear, sorry) is about under what assumptions I can integrate this $o_P$ or $O_P$. I was not worried about the expansion itself, but about what happens to it when taking its expectation. – Clement C., Jan 16 at 17:21

• See my example above. In general, you can't (convergence in probability does not imply convergence in expectation). If you are trying to show convergence in distribution, then it is enough to show convergence in probability and appeal to Slutsky's theorem (for example). – OldGodzilla, Jan 16 at 18:18

• Thank you. – Clement C., Jan 20 at 3:31