Bounding min-entropy gain in differential privacy














In the privacy-related computer science literature, we say that a randomized algorithm $\mathcal{K}$ that produces a model $\theta$ from a sample $X=(x_1,\dots,x_n)$ is $\epsilon$-differentially private iff



$$
\forall \theta, \quad \mathbb{P}[\mathcal{K}(X)=\theta \mid X=X_0] \leq e^\epsilon \, \mathbb{P}[\mathcal{K}(X)=\theta \mid X=X_1] \tag{1}
$$



where $X_0$ and $X_1$ are any Hamming-1 neighbors (i.e., samples that differ in a single entry).
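To make the definition concrete, here is a minimal sketch of a mechanism satisfying (1), assuming binary entries and using randomized response as $\mathcal{K}$ (the function names are illustrative, not from any particular library):

```python
import numpy as np

rng = np.random.default_rng()

def randomized_response(bit: int, epsilon: float) -> int:
    # Report the true bit with probability e^eps / (1 + e^eps), flip it otherwise;
    # the two conditional output probabilities differ by a factor of exactly e^eps.
    p_truth = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    return bit if rng.random() < p_truth else 1 - bit

def release(X, epsilon: float):
    # Apply randomized response independently to each entry. Hamming-1 neighbors
    # differ in a single entry, so the output likelihood ratio factorizes and is
    # bounded by e^eps, i.e. this mechanism satisfies (1).
    return [randomized_response(x, epsilon) for x in X]
```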



Using Bayes' rule, we can rewrite (1) as a bounded Bayes ratio:



$$
\frac{\mathbb{P}[X=X_0\mid\theta]}{\mathbb{P}[X=X_1\mid\theta]} \leq e^\epsilon \, \frac{\mathbb{P}[X=X_0]}{\mathbb{P}[X=X_1]} \tag{2}
$$
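Spelling out the step: Bayes' rule gives $\mathbb{P}[X=X_i\mid\theta] = \mathbb{P}[\mathcal{K}(X)=\theta\mid X=X_i]\,\mathbb{P}[X=X_i]/\mathbb{P}[\theta]$, and the $\mathbb{P}[\theta]$ factors cancel in the ratio, so

$$
\frac{\mathbb{P}[X=X_0\mid\theta]}{\mathbb{P}[X=X_1\mid\theta]}
= \frac{\mathbb{P}[\mathcal{K}(X)=\theta\mid X=X_0]}{\mathbb{P}[\mathcal{K}(X)=\theta\mid X=X_1]}
\cdot \frac{\mathbb{P}[X=X_0]}{\mathbb{P}[X=X_1]}
\leq e^\epsilon\,\frac{\mathbb{P}[X=X_0]}{\mathbb{P}[X=X_1]}.
$$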



In order to relate this to the Bayes (irreducible) error of an adversary trying to estimate $X$ from $\theta$, one can leverage the min-entropy:



$$
H_\infty(X) = -\log \max_x \mathbb{P}[X=x]
\quad\text{and}\quad
H_\infty(X\mid\theta) = -\log \mathbb{E}_\theta\!\left[\max_x \mathbb{P}[X=x\mid\theta]\right]
$$
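As a numeric sanity check of these two definitions, here is a small sketch on a made-up joint distribution (the probabilities below are illustrative only):

```python
import numpy as np

# p_joint[i, j] = P[X = x_i, theta = t_j]; toy values summing to 1.
p_joint = np.array([[0.20, 0.10],
                    [0.05, 0.25],
                    [0.25, 0.15]])

p_x = p_joint.sum(axis=1)            # marginal P[X = x]
p_theta = p_joint.sum(axis=0)        # marginal P[theta = t]
p_x_given_theta = p_joint / p_theta  # column j is P[X = . | theta = t_j]

# H_inf(X) = -log max_x P[X = x]
h_min = -np.log(p_x.max())

# H_inf(X | theta) = -log E_theta[ max_x P[X = x | theta] ]
# (expectation over theta of the adversary's best guessing probability)
h_min_cond = -np.log(np.sum(p_theta * p_x_given_theta.max(axis=0)))

print(h_min, h_min_cond)  # conditioning can only decrease the min-entropy
```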



First part of the question: can we translate (2) into a bound involving $H_\infty(X\mid\theta)$ and $H_\infty(X)$?



Second part of the question: since $H_\infty(X\mid\theta)$ takes an expectation over $\theta$, it looks like a weaker guarantee than (1): the leakage bound holds only in expectation, and some realizations of $\theta$ may leak more than the bound allows. Is there a well-accepted entropy definition that keeps the same expressivity as (1), e.g.



$$
H_\text{bla}(X\mid\theta) = -\log \max_{\theta,x} \mathbb{P}[X=x\mid\theta]
$$



or something like that?



Thanks!



























Tags: entropy, ratio, log-likelihood
















      asked Jan 16 at 11:19









Jerome F






















