Why are homomorphisms of graphs/relations defined the way they are?














Dealing with relations and functions in a set-theoretic context, i.e. as just sets of ordered pairs, note that for any relations $R$ and $L$ and any given function $f$ we have:



$$\forall x,y\in \text{dom}(f)\left(xRy\implies f(x)Lf(y)\right)\iff f^{-1}\circ L\circ f\supseteq R\cap\text{dom}(f)^2$$
$$\forall x,y\in \text{dom}(f)\left(f(x)Lf(y)\implies xRy\right)\iff f^{-1}\circ L\circ f\subseteq R\cap\text{dom}(f)^2$$
$$\forall x,y\in \text{dom}(f)\left(xRy\iff f(x)Lf(y)\right)\iff f^{-1}\circ L\circ f=R\cap\text{dom}(f)^2$$
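For concreteness, the first of these equivalences can be verified by brute force over a tiny universe. The following Python sketch (the helper names and the two-element universe are purely illustrative) checks every function and every pair of relations on $\{0,1\}$; the other two equivalences can be checked the same way.

```python
from itertools import product

def compose(S, T):
    # (a, c) is in S∘T iff there is some b with (a, b) in T and (b, c) in S,
    # i.e. T is applied first, then S.
    return {(a, c) for (a, b1) in T for (b2, c) in S if b1 == b2}

def inverse(S):
    return {(b, a) for (a, b) in S}

def relations_on(universe):
    # every subset of universe x universe
    pairs = list(product(universe, repeat=2))
    for mask in range(1 << len(pairs)):
        yield {p for i, p in enumerate(pairs) if mask >> i & 1}

U = [0, 1]                                        # dom(f) = U, so R ∩ dom(f)^2 = R here
for values in product(U, repeat=len(U)):          # every function f : U -> U
    f = dict(zip(U, values))
    F = set(f.items())                            # f as a set of ordered pairs
    for R in relations_on(U):
        for L in relations_on(U):
            pointwise = all((f[x], f[y]) in L for (x, y) in R)
            relational = compose(inverse(F), compose(L, F)) >= R
            assert pointwise == relational
print("first equivalence verified on all 4 * 16 * 16 cases")
```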



With that said, why is the first of these conditions the one chosen as the definition of a homomorphism in the context of graph theory?



Are the latter two notions less important than the first one? If so, why?



Reading the nLab page https://ncatlab.org/nlab/show/relation#morphisms, it says:




In general, it's more natural to require only preservation; these are the morphisms you get if you consider a set with a relation as a model of a finite-limit theory or a simple directed graph.




However, I don't really understand why this makes the first notion more natural; if someone could elaborate on why, I would really appreciate it.

















      graph-theory relations relation-algebra






edited Jan 21 at 23:19 by Ethan

asked Jan 21 at 19:54 by Ethan






















2 Answers



















          To begin with, homomorphisms of graphs are not as important to graph theory as, for example, homomorphisms of groups are important to group theory. They show up occasionally, but I think people coming from an abstract algebra background tend to overemphasize them.



It's much more frequent to encounter subgraphs. We can say that $G$ is a subgraph of $H$ iff there is an injective homomorphism from $G$ to $H$, which motivates your first notion. That being said, an injective $f : V(G) \to V(H)$ that satisfies your third property gives us an induced copy of $G$ in $H$, so that one's not too bad either.



          There are at least two more important notions in graph theory that are captured by graph homomorphisms:




          • A walk of length $n$ in $G$ is a homomorphism from a path graph with $n$ edges to $G$.

          • A $k$-coloring of $G$ is a homomorphism from $G$ to a clique $K_k$.


          So we like the first notion best because it has the most applications to things we actually care about.
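As a concrete illustration of the coloring example above, here is a short Python sketch (the graphs and helper names are made up purely for illustration): a proper $3$-coloring of the $5$-cycle is exactly an edge-preserving map into $K_3$, and the fact that $C_5$ has no $2$-coloring shows up as the absence of any homomorphism into $K_2$.

```python
from itertools import product

def is_homomorphism(f, edges_G, edges_H):
    # f maps V(G) -> V(H); check that every edge of G is sent to an edge of H
    return all((f[u], f[v]) in edges_H for (u, v) in edges_G)

def undirected(edges):
    # symmetrize an edge set so (u, v) and (v, u) are both present
    return set(edges) | {(v, u) for (u, v) in edges}

# G = the 5-cycle C_5, H = the clique K_3 (vertices 0, 1, 2, all distinct pairs adjacent)
C5 = undirected({(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)})
K3 = {(a, b) for a, b in product(range(3), repeat=2) if a != b}

coloring = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}     # a proper 3-coloring of C_5
assert is_homomorphism(coloring, C5, K3)

# C_5 is not 2-colorable: no map into K_2 is a homomorphism
K2 = {(0, 1), (1, 0)}
assert not any(is_homomorphism(dict(zip(range(5), vals)), C5, K2)
               for vals in product(range(2), repeat=5))
```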



Your second notion is not interesting for graphs in particular, because it doesn't add anything new: a function $f: V(G) \to V(H)$ satisfies the second property iff $f$ is a homomorphism from the complement $\overline{G}$ to the complement $\overline{H}$.
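This too can be checked mechanically. The sketch below works in the relational setting of the question (complements are taken inside the full square $V\times V$, diagonal included); the names are again just illustrative.

```python
from itertools import product

def preserves(f, R, L):
    # first notion: x R y implies f(x) L f(y)
    return all((f[x], f[y]) in L for (x, y) in R)

def reflects(f, R, L):
    # second notion: f(x) L f(y) implies x R y
    return all((x, y) in R for x, y in product(f, repeat=2) if (f[x], f[y]) in L)

def rel_complement(V, R):
    # complement within V x V (diagonal included, as for relations)
    return {(x, y) for x, y in product(V, repeat=2) if (x, y) not in R}

def all_relations(V):
    pairs = list(product(V, repeat=2))
    for mask in range(1 << len(pairs)):
        yield {p for i, p in enumerate(pairs) if mask >> i & 1}

VG, VH = [0, 1], [0, 1]
for R in all_relations(VG):
    for L in all_relations(VH):
        for values in product(VH, repeat=len(VG)):
            f = dict(zip(VG, values))
            assert reflects(f, R, L) == preserves(f, rel_complement(VG, R),
                                                  rel_complement(VH, L))
print("second notion = first notion applied to complements (all small cases checked)")
```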






answered Jan 21 at 20:39 by Misha Lavrov













• In regards to your last comment, wouldn't that just mean the second notion is equally strong as the first one? Since you can express either in terms of the other, using graph complements. – Ethan, Jan 21 at 23:31










• Right - that comment is not an argument for why the second notion is worse than the first, just an argument for why we shouldn't keep both. – Misha Lavrov, Jan 22 at 0:01










• And since the third can be expressed in terms of one and two, which can be expressed in terms of either one or two, all three can then be expressed in terms of the first notion or the second notion, so it suffices to study just one or two if we want to study all three? Is that the underlying argument? – Ethan, Jan 22 at 0:05










• Most of my argument is that the first notion is the one that actually appears in things we want to describe in practice. But that's certainly a factor, yes. (If the second notion appeared very often, though, we'd keep it around even though it can be expressed in terms of the first. For example, we have a word for closed sets even though they're just complements of open sets.) – Misha Lavrov, Jan 22 at 0:07






















          I think the nLab passage is making things more mysterious than they need to be: the key point is that this definition of homomorphism is the one which generalizes the usual notion of homomorphism of functional structures appropriately.



Let's consider, say, magma homomorphisms. A magma is just a set with a binary operation; given magmas $\mathcal{A}=(A,*)$ and $\mathcal{B}=(B,+)$, a homomorphism $h$ from $\mathcal{A}$ to $\mathcal{B}$ "obviously" needs to satisfy $$h(x*y)=h(x)+h(y).$$



Now, the point is that functions can be thought of as special kinds of relations. Specifically, a function $f$ is, in some sense, "equivalent to" its graph, $\{(a_1,\dots,a_n,b): f(a_1,\dots,a_n)=b\}$. When we "relationize" a magma, we're looking at a set with a ternary relation $R$ on it, and the homomorphism condition above turns into $$R_*(x,y,z)\implies R_+(h(x),h(y),h(z)).$$ That is, equality preservation corresponds to relation preservation. Meanwhile, the above implication clearly doesn't reverse in general.
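Here is a tiny Python sketch of that last point (the magmas and helper names are mine, chosen only for illustration): a magma homomorphism preserves the "relationized" operation, but the displayed implication need not reverse, i.e. the homomorphism need not reflect the relation.

```python
from itertools import product

def graph_of_op(elements, op):
    # the ternary relation {(x, y, z) : op(x, y) = z} -- the "relationized" operation
    return {(x, y, op(x, y)) for x, y in product(elements, repeat=2)}

def preserves_relation(h, R_src, R_tgt):
    # R_*(x, y, z) implies R_+(h(x), h(y), h(z))
    return all((h[x], h[y], h[z]) in R_tgt for (x, y, z) in R_src)

def reflects_relation(h, R_src, R_tgt):
    # the reverse implication: R_+(h(x), h(y), h(z)) implies R_*(x, y, z)
    return all((x, y, z) in R_src
               for x, y, z in product(h, repeat=3)
               if (h[x], h[y], h[z]) in R_tgt)

# Source magma: (Z_2, + mod 2).  Target magma: the one-element magma {0}.
A = [0, 1]
R_plus2 = graph_of_op(A, lambda x, y: (x + y) % 2)
B = [0]
R_trivial = graph_of_op(B, lambda x, y: 0)

h = {0: 0, 1: 0}   # the unique map A -> B; it is a magma homomorphism
assert preserves_relation(h, R_plus2, R_trivial)      # preservation holds ...
assert not reflects_relation(h, R_plus2, R_trivial)   # ... but does not reverse:
# e.g. (0, 0, 1) maps to (0, 0, 0), which is in R_trivial, yet 0 + 0 != 1 in Z_2.
```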



          This motivates the definition of homomorphism, not just for graphs, but for arbitrary first-order structures: preserve equality of terms, and preserve instances (but not necessarily non-instances) of relations.






answered Jan 21 at 20:42 by Noah Schweber













• If we write $R_{*}(x,y,z)\iff x*y=z$ and $R_{+}(a,b,c)\iff a+b=c$ then it's clear to me that if $h$ is a homomorphism from $\mathcal{A}$ to $\mathcal{B}$ then we have $R_{*}(x,y,z)\implies R_{+}(h(x),h(y),h(z))$. However I don't see how having $R_{*}(x,y,z)\implies R_{+}(h(x),h(y),h(z))$ alone would imply $h$ is a homomorphism from $\mathcal{A}$ to $\mathcal{B}$. Am I misunderstanding what you wrote? – Ethan, Jan 21 at 23:17










