Proving that $\text{Sym}(T)$ is a group without treating functions as sets


























One of the points that Tim Gowers makes in this blog post is that the set-theoretic definition of functions achieves nothing, and that it is possible to treat functions as fundamentally different objects from sets. However, I think this might raise an issue about the existence of functions.





Consider an example in group theory. Suppose we want to prove that for each set $T$, the set $\text{Sym}(T)$ of all bijections on $T$ forms a group under function composition. We know that the composition of two bijections is a bijection, that function composition is associative, that we can take the identity element to be the identity function on $T$, and that the inverse of a bijection is again a bijection. Let's examine the identity axiom more closely:



The identity axiom requires the existence of an identity element. We need to somehow prove that for every set $T$ there exists a function $f: T \to T$ such that for each $t \in T$, $f(t) = t$.
If we treat functions as sets, we can do this. If $T$ is a set, it can be shown that $T \times T$ exists and is the unique set consisting of all ordered pairs of elements of $T$. Then, by the axiom of specification, the set $$ f = \{ (a,b) \in T \times T \mid a = b \}$$ also exists. Once the existence of the identity function is established, it can be shown that it satisfies the remaining requirements of the identity axiom.
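For instance, the remaining check that this $f$ really acts as an identity under composition is routine: if $g : T \to T$ is any bijection, then for every $t \in T$,
$$(f \circ g)(t) = f(g(t)) = g(t) \quad\text{and}\quad (g \circ f)(t) = g(f(t)) = g(t),$$
since $f(s) = s$ for every $s \in T$, so $f \circ g = g = g \circ f$.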



However, if we treat functions as fundamentally different from sets, it seems that we need to be careful when specifying axioms surrounding them. Statements such as:




  • A function is a rule which associates each element of one set with exactly one element of another

  • If $f: A \to B$, then for each $a \in A$, $f(a) \in B$


do not seem sufficient to show that any function exists in the first place, so we would be left unable to prove that symmetric groups are actually groups!



At the end of his post, Gowers proposes the following way of defining functions:




Here in detail is what I would say. Let $G$ (to stand for “graph”) be a subset of $A \times B$ such that for every $x \in A$, there is exactly one $y \in B$ with $(x,y) \in G$. Then we can define a function $f: A \to B$ in terms of $G$ by letting $f(x)$ be the unique $y$ such that $(x,y) \in G$. Moreover, every function from $A$ to $B$ can be obtained this way, since $f$ is a function we can define $G$ to be the set of all $(x,y)$ such that $y = f(x)$.




Is this sufficient to solve the problem of asserting the existence of an identity function on each set? What if $A$ is empty? Then there is no unique $y$ such that $(x,y) \in G$. Going backward seems fine, since if there is no $(x,y)$ such that $y = f(x)$, then the resulting $G$ would be empty. Am I missing something here?



In summary, my questions are:




  • Is the argument that, for each set $T$, an identity function on $T$ exists valid? That is, is it valid under the framework of defining functions as sets?

  • Is the method of defining functions that Tim Gowers proposes sufficient to deal with the problem of the existence of identity functions?





























group-theory functions elementary-set-theory symmetric-groups






edited Jan 24 at 11:53







E-mu

















asked Jan 22 at 20:47









E-mu









  • 3
    The empty function is a function $\emptyset \to \emptyset$. It's the only such function, so its symmetry group is the trivial group.
    – Randall
    Jan 22 at 21:01












  • In order to assert that, wouldn't we need to prove that the empty function is a bijection, and acts as an identity under function composition? I don't see the implication there.
    – E-mu
    Jan 22 at 21:05






  • 2
    It is, vacuously. Its inverse is itself. None of this is interesting, though.
    – Randall
    Jan 22 at 21:05








  • 1
    I do not know of any definition of functions between sets that doesn't refer to, well, sets.
    – Randall
    Jan 22 at 21:11






  • 1
    No, it's hopeless. You seem to only want to use the particular property of functions you said. Therefore, let me define, for sets $A,B$, the set of FUN-CTIONS from $A$ to $B$ as the empty set. Then it satisfies your property: If $f$ is a FUN-CTION from $A$ to $B$, blahblah. Indeed this is a universal quantifier on the empty set, blahblah does not matter. From this you won't derive that the set of FUN-CTIONS from $\emptyset$ to $\emptyset$ is nonempty, since it's indeed empty.
    – YCor
    Jan 22 at 23:30
















1 Answer

From the tone of the question, it sounds like you want a proof that will work even in mathematical foundations which treat functions as being fundamentally distinct objects from sets. This is a typical feature of the family of mathematical foundations known as type theory.



What a type theory usually provides is a way to construct compound types and functions from basic principles. A common set of such constructions can be described as simply typed lambda calculus, in which functions are defined using a vaguely Lisp-like syntax but with type annotations -- for example, $\lambda (f : X \multimap Y) . \lambda (g : Y \multimap Z) . \lambda (x : X) . g(f(x))$. Here, $X \multimap Y$ denotes the type of functions from $X$ to $Y$, and for example $\lambda (x:X) . g(f(x))$ represents the function with domain $X$ which takes $x \mapsto g(f(x))$. Then, we have a set of "type checking" rules which are similar to formal proof rules, in which you want to prove so-called typing inferences of a form like $$X \mathrm{~type}, Y \mathrm{~type}, Z \mathrm{~type}, f : X \multimap Y, g : Y \multimap Z \vdash [\lambda(x : X) . g(f(x))] : X \multimap Z.$$
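For readers who think in code: the composition term above has a direct Haskell analogue, with Haskell's ordinary function arrow standing in for $X \multimap Y$ (this is only an informal illustration, and the name `compose` is my own, not part of any formal system described here):

```haskell
-- The term  lambda (f : X -o Y) . lambda (g : Y -o Z) . lambda (x : X) . g(f(x)),
-- written in Haskell; the lowercase type variables x, y, z stand for the types X, Y, Z.
compose :: (x -> y) -> (y -> z) -> (x -> z)
compose f g = \v -> g (f v)
```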



Typical rules for verifying typing inferences include things like assumption:



\begin{align*} \phi & \in \Gamma \\ \hline \Gamma & \vdash \phi \end{align*}



map types:



\begin{align*} \Gamma & \vdash \tau \mathrm{~type} \\
\Gamma & \vdash \upsilon \mathrm{~type} \\
\hline \Gamma & \vdash (\tau \multimap \upsilon) \mathrm{~type}
\end{align*}

function application:



\begin{align*} \Gamma & \vdash \tau \mathrm{~type} \\
\Gamma & \vdash \upsilon \mathrm{~type} \\
\Gamma & \vdash \phi : \tau \multimap \upsilon \\
\Gamma & \vdash \psi : \tau \\
\hline
\Gamma & \vdash \phi(\psi) : \upsilon
\end{align*}



and abstraction:



\begin{align*}
\Gamma & \vdash \tau \mathrm{~type} \\
\Gamma & \vdash \upsilon \mathrm{~type} \\
\Gamma, x : \tau & \vdash \phi : \upsilon \\
\hline \Gamma & \vdash (\lambda (x : \tau) . \phi) : \tau \multimap \upsilon.
\end{align*}



(Here in the last one, we require that $x$ does not appear in any formula in the context $\Gamma$.) Here, $\tau$ and $\upsilon$ represent formulas representing a type; $\phi, \psi$ represent formulas representing a (possibly complex) value expression; and $x$ represents an atomic variable symbol. The statements above the line in each rule represent the hypotheses of the typing inference rule, and the statement below represents the conclusion.



In this context, it is easy to prove $X \mathrm{~type} \vdash (\lambda (x : X) . x) : X \multimap X$. First, by assumption, we have $X \mathrm{~type}, x : X \vdash x : X$; therefore, by the abstraction rule (and a couple more uses of the assumption rule repeating that $X \mathrm{~type}$), we get $X \mathrm{~type} \vdash (\lambda (x : X) . x) : X \multimap X$. This has the interpretation that whenever $X$ is a type, $\lambda (x : X) . x$ is a function from $X$ to $X$.
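To make this concrete, here is a minimal Haskell sketch of a type checker in the spirit of the rules above (the data types and names `Ty`, `Term`, `typeOf` are my own illustration, not taken from any library or from the post); running it derives exactly the judgement that $\lambda (x : X) . x$ has type $X \multimap X$:

```haskell
-- Minimal checker for simply typed lambda calculus (illustrative sketch only).
data Ty = Base String | Arrow Ty Ty          -- base types and function types tau -o upsilon
  deriving (Eq, Show)

data Term = Var String                       -- variable
          | App Term Term                    -- application
          | Lam String Ty Term               -- typed abstraction, lambda (x : tau) . body
  deriving (Show)

type Ctx = [(String, Ty)]                    -- a context Gamma assigning types to variables

-- typeOf mirrors the assumption, application and abstraction rules.
typeOf :: Ctx -> Term -> Maybe Ty
typeOf ctx (Var x)     = lookup x ctx                          -- assumption
typeOf ctx (App f a)   = case (typeOf ctx f, typeOf ctx a) of
  (Just (Arrow t u), Just t') | t == t' -> Just u              -- application
  _                                     -> Nothing
typeOf ctx (Lam x t b) = Arrow t <$> typeOf ((x, t) : ctx) b   -- abstraction

main :: IO ()
main = print (typeOf [] (Lam "x" (Base "X") (Var "x")))
-- prints Just (Arrow (Base "X") (Base "X")), i.e. (lambda (x : X) . x) : X -o X
```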





In fact, many type theories will also provide a fundamental way to construct an element $\operatorname{except}_X : \emptyset \multimap X$ whenever $X$ is any type. If you interpret simply typed lambda calculus as a bare-bones programming language, then a possible interpretation of $\operatorname{except}_X$ is as a function which can never be called; so the internal implementation could be some formalization of "if this function is ever called, something must have gone seriously wrong with the runtime since there should be no possible way to provide me with an element of the empty type $\emptyset$ as the input argument; therefore, I will signal an exception and exit, so the function will never return to its caller and therefore the return type of this function doesn't matter".
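Haskell's base library provides essentially this construction as `absurd :: Void -> a` in `Data.Void`; a small illustrative sketch (the name `exceptInt` is mine):

```haskell
import Data.Void (Void, absurd)

-- 'absurd' plays the role of except_X: from a (nonexistent) value of the empty
-- type Void it produces a value of any type, and it can never actually be
-- applied, because Void has no values.
exceptInt :: Void -> Int
exceptInt = absurd

main :: IO ()
main = putStrLn "exceptInt type-checks even though it can never be called."
```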





As it happens, simply typed lambda calculus also has a straightforward interpretation in ZFC: you ignore hypotheses of the form $X \mathrm{~type}$ (since all the objects you work with are sets anyway); you interpret $X \multimap Y$ as usual as $\{ f \in P(X \times Y) \mid \forall x \in X, \exists! y \in Y, (x, y) \in f \}$; you interpret $\phi : \tau$ as the proposition $\phi \in \tau$; you can interpret $\operatorname{except}_X$ as $\emptyset \in (\emptyset \multimap X)$; and so on. Then all the typing inference rules listed above lead to valid implications in ZFC. In particular, the validity of the typing inference $X \mathrm{~type} \vdash (\lambda (x : X) . x) : X \multimap X$ implies that $\forall X, \{ (x, x) \mid x \in X \} \in (X \multimap X)$ is a theorem of ZFC.
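Unfolding the interpretation of $X \multimap X$, that last statement reads
$$\forall X,\; \{ (x, x) \mid x \in X \} \in \{ f \in P(X \times X) \mid \forall x \in X, \exists! y \in X, (x, y) \in f \},$$
which is precisely the set-theoretic existence statement for the identity function that the question establishes via the axiom of specification.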






        answered Jan 22 at 23:16









Daniel Schepler
