Tensorflow Hub module reuse
Say I want to use a specific module (text embeddings) from TF Hub to create two distinct models that I would then like to export and serve.



Option 1:
Import the module for each model, put a classifier on top of each, and export two models, serving each in its own Docker container. Each exported model contains both the underlying embedding module and its classifier.



Option 2:
Serve the embedding module itself, and feed its output to two separately served models that do not contain the embeddings themselves. (Is this even possible?)



My computer science background tells me that Option 2 is better: we reuse a single embedding module for both models and decouple the classifiers from it.
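To make the Option 2 idea concrete, here is a minimal conceptual sketch (plain Python, not the TF Hub or TF Serving API; `embed`, `classifier_a`, and `classifier_b` are hypothetical stand-ins): one shared embedding step produces vectors, and two downstream models consume only those vectors.

```python
# Conceptual sketch of Option 2 (hypothetical names, NOT TF Hub/Serving code):
# one shared embedding service, two downstream classifiers that only
# consume its output vectors and never touch the raw text themselves.

def embed(text):
    """Stand-in for the served embedding module (e.g. a TF Hub text
    embedding). Returns a fixed-size vector for any input string."""
    # Toy 3-dim "embedding": character-count features, for illustration only.
    return [len(text), text.count(" "), sum(map(ord, text)) % 7]

def classifier_a(vector):
    # Downstream model A: depends only on the vector, not on how it was made.
    return "long" if vector[0] > 10 else "short"

def classifier_b(vector):
    # Downstream model B: reuses the exact same embedding output.
    return "multiword" if vector[1] > 0 else "single"

v = embed("hello world")        # embed once...
print(classifier_a(v), classifier_b(v))  # ...serve two models from it
```

The appeal is that the (expensive) embedding runs once per request and both classifiers stay small; the cost, as the discussion below the question notes, is an extra network hop and a hard contract between the services.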



However, from a practical standpoint, a data scientist writing the code imports the module and trains a classifier on top of it, so it becomes cumbersome to export the classifier on its own, without the underlying embeddings.



Can anyone point me in the right direction? Hopefully my question makes sense; I am not a data scientist myself, I come more from a development background.



Thanks










      tensorflow tensorflow-serving tensorflow-hub






      asked Nov 21 '18 at 20:43









Tiberiu

























1 Answer






































          Putting a classifier on top of an embedding module creates a fairly strong dependency: the classifier must be trained to the particular embedding space. Unless you make very special arrangements, just swapping in another embedding module won't work. So Option 1 is quite good: it yields two models that can be served and updated independently. They have some overlap, akin to two statically linked programs using the same library, but the source code is still modular: using Hub embedding modules through their common signature makes them interchangeable.



          Option 2, by comparison, gives you three moving parts with non-trivial dependencies. If your goal is simplicity, I wouldn't go there.
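The coupling this answer describes can be seen in a toy example (plain Python, hypothetical names, not TF code): a linear classifier's weights are only meaningful relative to the embedding space it was trained against, so silently swapping in a different embedding module with the same output shape changes the answers.

```python
# Toy illustration of the coupling: the classifier's weights are tied
# to the embedding space that produced its training vectors.

def embedding_v1(text):
    # Feature space the classifier was "trained" against.
    return [len(text), text.count("a")]

def embedding_v2(text):
    # Same output shape, but a different feature space (dimensions swapped).
    return [text.count("a"), len(text)]

WEIGHTS = [1.0, -1.0]  # weights fit against embedding_v1's space

def classify(vector):
    score = sum(w * x for w, x in zip(WEIGHTS, vector))
    return "positive" if score > 0 else "negative"

text = "banana"  # len = 6, count("a") = 3
print(classify(embedding_v1(text)))  # score = 6 - 3 = 3  -> "positive"
print(classify(embedding_v2(text)))  # score = 3 - 6 = -3 -> "negative"
```

This is why serving the embedding module separately (Option 2) requires versioning the embedding contract as carefully as the models themselves, while Option 1's bundling sidesteps the problem.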






                answered Nov 27 '18 at 10:08









arnoegw
































