Derivation of Chernoff's bound on Bayes error for multivariate Gaussian distributions


























I was following the derivation of the Chernoff bound on the Bayes error given in the book *Pattern Classification* by Duda, Hart, and Stork. However, there is a minor difference between the result in the book and the details I have worked out. Please guide me to where I am wrong.



We know that the Bayes discriminant functions for classifying $x$ into one of two classes $\omega_1$ and $\omega_2$ are given by $P(\omega_1)p(x|\omega_1)$ and $P(\omega_2)p(x|\omega_2)$; since the Bayes rule picks the class with the larger value, the error contribution at each $x$ is the smaller one, so the Bayes error is $\int_x \min\big(P(\omega_1)p(x|\omega_1),\, P(\omega_2)p(x|\omega_2)\big)\, dx$.



We note that $\min(a,b) \le a^\beta b^{1-\beta}$ for $a, b > 0$ and $0 \le \beta \le 1$, so the Bayes error is bounded by $P(\omega_1)^\beta P(\omega_2)^{1-\beta}\int_x p(x|\omega_1)^\beta p(x|\omega_2)^{1-\beta}\, dx$.
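(To see the inequality, assume without loss of generality that $a \le b$; then for $0 \le \beta \le 1$, $\min(a,b) = a = a^\beta a^{1-\beta} \le a^\beta b^{1-\beta}$.)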



We consider the case where the class-conditional densities are normal: $p(x|\omega_1) = \mathcal N(\mu_1, \Sigma_1)$ and $p(x|\omega_2) = \mathcal N(\mu_2, \Sigma_2)$. Then the Bayes error is bounded by



$P(\omega_1)^\beta P(\omega_2)^{1-\beta}\int_x \mathcal N(\mu_1, \Sigma_1)^\beta\, \mathcal N(\mu_2, \Sigma_2)^{1-\beta}\, dx$.



$\int_x \mathcal N(\mu_1, \Sigma_1)^\beta\, \mathcal N(\mu_2, \Sigma_2)^{1-\beta}\, dx$



$= \frac{1}{\sqrt{(2\pi)^d |\Sigma_1|^\beta|\Sigma_2|^{1-\beta}}}\int_x e^{-L/2}\, dx$, where $L = \beta(x-\mu_1)^t\Sigma_1^{-1}(x-\mu_1) + (1-\beta)(x-\mu_2)^t\Sigma_2^{-1}(x-\mu_2)$.
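(The prefactor comes from combining the two normalizing constants: $\big[(2\pi)^{d/2}|\Sigma_1|^{1/2}\big]^{-\beta}\big[(2\pi)^{d/2}|\Sigma_2|^{1/2}\big]^{-(1-\beta)} = \frac{1}{\sqrt{(2\pi)^d |\Sigma_1|^\beta|\Sigma_2|^{1-\beta}}}$.)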



Let $A = \beta\Sigma_1^{-1}$, $B = (1-\beta)\Sigma_2^{-1}$, $y = x - \mu_1$, and $a = \mu_2 - \mu_1$.



Then $L = y^tAy + (y-a)^tB(y-a)$



$= y^t(A+B)y - a^tBy - y^tBa + a^tBa$



$= y^t(A+B)y - 2a^tBy + a^tBa$, since $B$ is symmetric.



Since $A+B$ is a positive definite matrix, it can be factored as $P^tP = A+B$ for some nonsingular matrix $P$ (e.g. via the Cholesky factorization).



$L = (Py)^t(Py) - 2a^tB(A+B)^{-1}P^tPy + a^tBa$



Note that $(a^tB(A+B)^{-1}P^t)(a^tB(A+B)^{-1}P^t)^t = a^tB(A+B)^{-1}P^tP(A+B)^{-1}Ba = a^tB(A+B)^{-1}Ba$, so adding and subtracting this term to complete the square,



$L = (Py)^t(Py) - 2a^tB(A+B)^{-1}P^tPy + a^tB(A+B)^{-1}Ba - a^tB(A+B)^{-1}Ba + a^tBa$



Note that $-a^tB(A+B)^{-1}Ba + a^tBa = a^t[B - B(A+B)^{-1}B]a = a^t[B(A+B)^{-1}(A+B) - B(A+B)^{-1}B]a = a^tB(A+B)^{-1}Aa$.



Let $z = y - (A+B)^{-1}Ba = x - \mu_1 - (A+B)^{-1}Ba$. Then



$L = (Pz)^t(Pz) + a^tB(A+B)^{-1}Aa$



$= z^t(A+B)z + a^tB(A+B)^{-1}Aa$



So,



$\frac{1}{\sqrt{(2\pi)^d |\Sigma_1|^\beta|\Sigma_2|^{1-\beta}}}\int_x e^{-L/2}\, dx =
e^{-\frac{1}{2}a^tB(A+B)^{-1}Aa}\,\frac{1}{\sqrt{(2\pi)^d |\Sigma_1|^\beta|\Sigma_2|^{1-\beta}}}\int_z e^{-\frac{1}{2}z^t(A+B)z}\, dz$



Note that $A+B = A(A^{-1}+B^{-1})B$.
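(This can be checked by expanding: $A(A^{-1}+B^{-1})B = (I + AB^{-1})B = B + A$.)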



So $(A+B)^{-1} = B^{-1}(A^{-1}+B^{-1})^{-1}A^{-1}$.



Also, it is known that $\int_z e^{-\frac{1}{2}z^t(A+B)z}\, dz = \sqrt{(2\pi)^d\, |(A+B)^{-1}|}$.
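(This is the standard Gaussian integral: substituting $u = Pz$ with $P^tP = A+B$, so that $dz = |P|^{-1}du$ and $|P| = |A+B|^{1/2}$, gives $\int_z e^{-\frac{1}{2}z^t(A+B)z}\, dz = |A+B|^{-1/2}\int_u e^{-\frac{1}{2}u^tu}\, du = (2\pi)^{d/2}|A+B|^{-1/2}$.)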



Since $B(A+B)^{-1}A = BB^{-1}(A^{-1}+B^{-1})^{-1}A^{-1}A = (A^{-1}+B^{-1})^{-1}$, the bounding integral is given by $e^{-\frac{1}{2}a^t(A^{-1}+B^{-1})^{-1}a}\sqrt{\frac{|(A+B)^{-1}|}{|\Sigma_1|^\beta|\Sigma_2|^{1-\beta}}}$.



Writing this as $e^{-k}$,



$k = \frac{1}{2}a^t(A^{-1}+B^{-1})^{-1}a - \frac{1}{2}\log\frac{|(A+B)^{-1}|}{|\Sigma_1|^\beta|\Sigma_2|^{1-\beta}}$



$A^{-1}+B^{-1} = \frac{\Sigma_1}{\beta} + \frac{\Sigma_2}{1-\beta} = \frac{(1-\beta)\Sigma_1 + \beta\Sigma_2}{\beta(1-\beta)}$



$|(A+B)^{-1}| = |B^{-1}(A^{-1}+B^{-1})^{-1}A^{-1}| = \frac{|\Sigma_1||\Sigma_2|}{|(1-\beta)\Sigma_1 + \beta\Sigma_2|}$
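(In detail, since $A$ and $B$ are $d \times d$: $|A^{-1}| = \beta^{-d}|\Sigma_1|$, $|B^{-1}| = (1-\beta)^{-d}|\Sigma_2|$, and $|(A^{-1}+B^{-1})^{-1}| = \frac{\beta^d(1-\beta)^d}{|(1-\beta)\Sigma_1 + \beta\Sigma_2|}$, so the powers of $\beta$ and $(1-\beta)$ cancel.)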



So $k = \frac{\beta(1-\beta)}{2}a^t[(1-\beta)\Sigma_1 + \beta\Sigma_2]^{-1}a - \frac{1}{2}\log\frac{|\Sigma_1|^{1-\beta}|\Sigma_2|^\beta}{|(1-\beta)\Sigma_1 + \beta\Sigma_2|}$.



In the book, $k = \frac{\beta(1-\beta)}{2}a^t[(1-\beta)\Sigma_2 + \beta\Sigma_1]^{-1}a - \frac{1}{2}\log\frac{|\Sigma_2|^{1-\beta}|\Sigma_1|^\beta}{|(1-\beta)\Sigma_2 + \beta\Sigma_1|}$, which is my result with $\Sigma_1$ and $\Sigma_2$ exchanged (equivalently, with $\beta$ replaced by $1-\beta$). But from the book's plots the function does not appear to be symmetric in $\beta$, so the two expressions cannot both be right. Please let me know where I have gone wrong.
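Not from the book, but one way I tried to see which form is consistent is a quick numerical check. The sketch below (one-dimensional case with arbitrary made-up parameters, so the covariances reduce to scalar variances $v_1, v_2$) compares direct quadrature of $\int p(x|\omega_1)^\beta p(x|\omega_2)^{1-\beta}\, dx$ against $e^{-k}$ for both my expression and the book's:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# 1-D sanity check with arbitrary (hypothetical) parameters; covariances
# reduce to the scalar variances v1 and v2.
mu1, v1 = 0.0, 1.0
mu2, v2 = 2.0, 4.0
beta = 0.3
a2 = (mu2 - mu1) ** 2  # squared mean difference, a^t a in 1-D

def k_mine(b):
    """My k: covariance mix (1-b)*v1 + b*v2."""
    vm = (1 - b) * v1 + b * v2
    return 0.5 * b * (1 - b) * a2 / vm - 0.5 * np.log(v1 ** (1 - b) * v2 ** b / vm)

def k_book(b):
    """The book's k as quoted above: covariance mix b*v1 + (1-b)*v2."""
    vm = b * v1 + (1 - b) * v2
    return 0.5 * b * (1 - b) * a2 / vm - 0.5 * np.log(v1 ** b * v2 ** (1 - b) / vm)

# Direct quadrature of p(x|w1)^beta * p(x|w2)^(1-beta) over the real line.
integrand = lambda x: (norm.pdf(x, mu1, np.sqrt(v1)) ** beta
                       * norm.pdf(x, mu2, np.sqrt(v2)) ** (1 - beta))
integral, _ = quad(integrand, -np.inf, np.inf)

print("quadrature   :", integral)
print("exp(-k_mine) :", np.exp(-k_mine(beta)))
print("exp(-k_book) :", np.exp(-k_book(beta)))
```

The two expressions differ whenever $v_1 \ne v_2$ and $\beta \ne 1/2$, so whichever one agrees with the quadrature should be the consistent form.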










Tags: probability, bayesian





