Tensorflow Hub module reuse
Say I want to use a specific module (text embeddings) from TF Hub to create two distinct models that I would then like to export and serve.

Option 1: Import the module for each model, put a classifier on top of each, and export two models; serve each in its own Docker container. These models contain both the underlying embedding module and the classifier.

Option 2: Serve the module itself, and have its output go to two different served models that themselves do not contain the embeddings. (Is this even possible?)
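To make the trade-off concrete, here is a minimal plain-Python sketch of what Option 1 amounts to. The names and dicts are illustrative stand-ins only, not real TF Hub or SavedModel APIs: each exported artifact carries its own copy of the embedding weights.

```python
import copy

# Stand-in for the downloaded TF Hub embedding module's weights
# (illustrative only; a real module would be a SavedModel, not a dict).
embedding_weights = {"table": [[0.1, 0.2], [0.3, 0.4]]}

# Option 1: each exported "model" bundles its own copy of the embeddings
# plus its own classifier head, so each artifact is self-contained and
# can be served in its own container.
model_a = {"embedding": copy.deepcopy(embedding_weights), "head": "classifier_a"}
model_b = {"embedding": copy.deepcopy(embedding_weights), "head": "classifier_b"}

# The two artifacts hold equal but independent copies of the embeddings.
assert model_a["embedding"] == model_b["embedding"]
assert model_a["embedding"] is not model_b["embedding"]
```

The price of this self-containedness is duplication: updating the embedding module means re-exporting (and re-deploying) both artifacts.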
My computer science background tells me that Option 2 is better: we reuse the original embedding module for both models and also decouple the models themselves from it.

From a practical standpoint, however, when a data scientist is coding, they import the module and train with the classifier on top of it, so it becomes cumbersome to export the model without the underlying embeddings.

Can anyone point me in the right direction? I hope my question makes sense; I am not a data scientist myself, I come more from a development background.
Thanks
tensorflow tensorflow-serving tensorflow-hub
asked Nov 21 '18 at 20:43 by Tiberiu
1 Answer
Putting a classifier on top of an embedding module creates a fairly strong dependency: the classifier must be trained to the particular embedding space. Unless you make very special arrangements, just swapping in another embedding module won't work. So Option 1 is quite good: it yields two models that can be served and updated independently. They have some overlap, akin to two statically linked programs using the same library, but the source code is still modular: using Hub embedding modules through their common signature makes them interchangeable.
Option 2, by comparison, gives you three moving parts with non-trivial dependencies. If your goal is simplicity, I wouldn't go there.
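The dependency this answer describes can be sketched in plain Python. Everything below is an illustrative stand-in (not a real TF Hub or TF Serving API): a fixed random table plays the role of the pretrained embedding, and both classifier heads only make sense against *its* output space.

```python
import numpy as np

EMBED_DIM = 8

def embedding_module(token_ids):
    """Stand-in for the shared, served embedding module (Option 2).
    Deterministically maps integer token ids to 8-dim vectors."""
    table = np.random.default_rng(42).normal(size=(100, EMBED_DIM))
    return table[np.asarray(token_ids)]

# Two independent classifier "heads" (here just weight vectors),
# each trained against the embedding space produced above.
rng = np.random.default_rng(0)
w_sentiment = rng.normal(size=EMBED_DIM)
w_topic = rng.normal(size=EMBED_DIM)

def serve(token_ids):
    """Option 2 topology: embed once, fan the vector out to both heads."""
    vec = embedding_module(token_ids).mean(axis=0)  # mean-pool the tokens
    return {"sentiment": float(vec @ w_sentiment),
            "topic": float(vec @ w_topic)}
```

Swapping `embedding_module` for a different embedding silently invalidates both heads' weights, which is exactly the coupling the answer warns about: sharing the module saves compute, but the heads are still trained to that one embedding space.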
answered Nov 27 '18 at 10:08 by arnoegw