tensorflow-serving signature for an XOR
I am trying to export my first XOR neural network with TensorFlow Serving, but I am not getting any result when I call the served model.
Here is the code I use to train and export the XOR model:



import tensorflow as tf
sess = tf.Session()
from keras import backend as K
K.set_session(sess)
K.set_learning_phase(0) # all new operations will be in test mode from now on

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants, signature_constants, signature_def_utils_impl

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
import numpy as np

model_version = "2"  # change this to export different model versions, e.g. 2, ..., 7
epoch = 100  # the higher this is, the more accurate the prediction; 10000 is a good value, it just takes a while to train

#Exhaustion of Different Possibilities
X = np.array([
    [0, 0],
    [0, 1],
    [1, 0],
    [1, 1]
])

#Return values of the different inputs
Y = np.array([[0],[1],[1],[0]])

#Create Model
model = Sequential()
model.add(Dense(8, input_dim=2))
model.add(Activation('tanh'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
sgd = SGD(lr=0.1)

model.compile(loss='binary_crossentropy', optimizer=sgd)
model.fit(X, Y, batch_size=1, epochs=epoch)  # nb_epoch was renamed to epochs in Keras 2

test = np.array([[0.0,0.0]])

#setting values for the sake of saving the model in the proper format
x = model.input
y = model.output

print('Results of Model', model.predict_proba(X))

prediction_signature = tf.saved_model.signature_def_utils.predict_signature_def({"inputs": x}, {"prediction": y})

valid_prediction_signature = tf.saved_model.signature_def_utils.is_valid_signature(prediction_signature)
if not valid_prediction_signature:
    raise ValueError("Error: Prediction signature not valid!")

builder = saved_model_builder.SavedModelBuilder('./'+model_version)
legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')

# Add the meta_graph and the variables to the builder
builder.add_meta_graph_and_variables(
    sess, [tag_constants.SERVING],
    signature_def_map={
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: prediction_signature,
    },
    legacy_init_op=legacy_init_op)

# save the graph
builder.save()
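To sanity-check the export before serving, the SavedModel can be inspected with the saved_model_cli tool that ships with TensorFlow. Assuming the model was written to ./2 as above, this prints the signature names and the input/output tensor names and shapes:

```shell
# Inspect the exported SavedModel in ./2 (directory name = model_version above);
# the "inputs" and "prediction" tensors from the signature should be listed.
saved_model_cli show --dir ./2 --all
```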


Then I serve the model in Docker:



docker run -p 8501:8501 --mount type=bind,source=/root/tensorflow3/projects/example/xor_keras_tensorflow_serving,target=/models/xor -e MODEL_NAME=xor -t tensorflow/serving &
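Two notes on this command. First, TensorFlow Serving expects the bound directory (/models/xor here) to contain one numeric subdirectory per model version, e.g. /models/xor/2/, matching the builder output above. Second, -p 8501:8501 publishes only the REST endpoint; the gRPC endpoint listens on 8500 by default, so a gRPC client would need that port published too. A sketch with both ports, using the same paths as in the question:

```shell
docker run -p 8500:8500 -p 8501:8501 \
  --mount type=bind,source=/root/tensorflow3/projects/example/xor_keras_tensorflow_serving,target=/models/xor \
  -e MODEL_NAME=xor -t tensorflow/serving &
```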


Then I request a prediction with:



curl -d '{"inputs": [1,1]}' -X POST http://localhost:8501/v2/models/xor


The result is always this:



<HTML><HEAD>
<TITLE>404 Not Found</TITLE>
</HEAD><BODY>
<H1>Not Found</H1>
</BODY></HTML>


Can you help me find where I went wrong?
I have tried replacing "inputs" with "instances" in the curl body, but nothing changed.
Thanks,
Manuel
      tensorflow tensorflow-serving
      asked Nov 19 '18 at 18:09
      Manuel Jan
          2 Answers
          Can you first try




          curl http://localhost:8501/v1/models/xor




          to check if the model is running? This should return the status of your model.



          From the RESTful API documentation, the status-endpoint format is GET http://host:port/v1/models/${MODEL_NAME}[/versions/${MODEL_VERSION}]
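For reference, once the model has loaded, that status call typically returns JSON shaped roughly like the following (field names per the TF Serving REST API; the version and contents will vary):

```json
{
  "model_version_status": [
    {
      "version": "2",
      "state": "AVAILABLE",
      "status": { "error_code": "OK", "error_message": "" }
    }
  ]
}
```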

          answered Nov 21 '18 at 4:41 (edited Nov 22 '18 at 16:08) – Yiding

            Thanks! You got the point, so I solved it.

            There were actually two errors in my curl command:

            1. In localhost:8501/v1/models/xor I had put v2, thinking it selected model version 2, but with v2 it does not work: v1 is the API version, not the version of your saved model.

            2. I also needed to specify the predict method, so the exact request URL is: http://localhost:8501/v1/models/xor:predict

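Putting both fixes together: the URL uses v1 plus the :predict verb, and the request body uses "instances" in row format. Since the model's input has shape [None, 2], a single example must be the nested list [[1, 1]], not [1, 1]. A minimal sketch of building that body in Python (the URL assumes the server from the question running on localhost:8501):

```python
import json

# Row-format body for TF Serving's REST predict API: one entry per example.
# The XOR model's input has shape [None, 2], so one example is [[1, 1]].
body = json.dumps({"instances": [[1, 1]]})
url = "http://localhost:8501/v1/models/xor:predict"  # v1 = API version, not model version

print(body)  # {"instances": [[1, 1]]}
```

The equivalent curl request is: curl -d '{"instances": [[1, 1]]}' -X POST http://localhost:8501/v1/models/xor:predict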

            answered Nov 22 '18 at 14:34 – Manuel Jan

            • Glad to help. Just edited my answer about the model version.
              – Yiding
              Nov 22 '18 at 16:09
            • Thanks Yiding, very appreciated!
              – Manuel Jan
              Nov 22 '18 at 17:16