Restore a saved neural network in Tensorflow
Before marking my question as a duplicate, please understand that I have gone through a lot of related questions, but none of their solutions cleared my doubts or solved my problem. I have a trained neural network that I want to save and later restore to evaluate against a test dataset.

I tried saving and restoring it, but I am not getting the expected results. Restoring doesn't seem to work; maybe I am using it wrongly, because the model seems to just use the values produced by the global variable initializer.

This is the code I am using for saving the model.



sess.run(tf.initializers.global_variables())
# num_epochs = 7
for epoch in range(num_epochs):
    start_time = time.time()
    train_accuracy = 0
    train_loss = 0
    val_loss = 0
    val_accuracy = 0

    for bid in range(int(train_data_size / batch_size)):
        X_train_batch = X_train[bid * batch_size:(bid + 1) * batch_size]
        y_train_batch = y_train[bid * batch_size:(bid + 1) * batch_size]
        sess.run(optimizer, feed_dict={x: X_train_batch, y: y_train_batch, prob: 0.50})

        train_accuracy += sess.run(model_accuracy, feed_dict={x: X_train_batch, y: y_train_batch, prob: 0.50})
        train_loss += sess.run(loss_value, feed_dict={x: X_train_batch, y: y_train_batch, prob: 0.50})

    for bid in range(int(val_data_size / batch_size)):
        X_val_batch = X_val[bid * batch_size:(bid + 1) * batch_size]
        y_val_batch = y_val[bid * batch_size:(bid + 1) * batch_size]
        val_accuracy += sess.run(model_accuracy, feed_dict={x: X_val_batch, y: y_val_batch, prob: 0.75})
        val_loss += sess.run(loss_value, feed_dict={x: X_val_batch, y: y_val_batch, prob: 0.75})

    train_accuracy = train_accuracy / int(train_data_size / batch_size)
    val_accuracy = val_accuracy / int(val_data_size / batch_size)
    train_loss = train_loss / int(train_data_size / batch_size)
    val_loss = val_loss / int(val_data_size / batch_size)

    end_time = time.time()

    saver.save(sess, './blood_model_x_v2', global_step=epoch)


After saving the model, files like the following are written to my working directory:



blood_model_x_v2-2.data-0000-of-0001

blood_model_x_v2-2.index

blood_model_x_v2-2.meta



Similarly, v2-3, and so on up to v2-6, plus a 'checkpoint' file. I then tried restoring it using this code snippet (after initializing), but I am getting results different from the expected ones. What am I doing wrong?



saver = tf.train.import_meta_graph('blood_model_x_v2-5.meta')
saver.restore(test_session,tf.train.latest_checkpoint('./'))
































  • What did you expect and what did happen?

    – Amir
    Jan 2 at 11:56











  • After I trained the model, I saved it and tested it. I got an accuracy of around 50%, but when I create a new session, restore the model, and test it, I get around 20-25% accuracy (my problem has 4 classes). I expect accuracy similar to what I get while testing.

    – Amruth Lakkavaram
    Jan 2 at 11:59













  • The way you restore the model is ok but notice that the variables to restore do not have to have been initialized, as restoring is itself a way to initialize variables.

    – Amir
    Jan 2 at 12:14











  • That gives me 'Attempt to use uninitialized value' error.

    – Amruth Lakkavaram
    Jan 2 at 12:19











  • The problem is here! You initialize the model randomly and do not load the checkpoints at all.

    – Amir
    Jan 2 at 12:22











Tags: python tensorflow
asked Jan 2 at 11:52









Amruth Lakkavaram
1 Answer
According to the TensorFlow docs:




Restore
Restores previously saved variables.



This method runs the ops added by the constructor for restoring
variables. It requires a session in which the graph was launched. The
variables to restore do not have to have been initialized, as
restoring is itself a way to initialize variables.




Let's see an example:



We save the model with something like this:



import tensorflow as tf

# Prepare to feed input, i.e. feed_dict and placeholders
w1 = tf.placeholder("float", name="w1")
w2 = tf.placeholder("float", name="w2")
b1 = tf.Variable(2.0, name="bias")
feed_dict = {w1: 4, w2: 8}

# Define a test operation that we will restore
w3 = tf.add(w1, w2)
w4 = tf.multiply(w3, b1, name="op_to_restore")
sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Create a saver object which will save all the variables
saver = tf.train.Saver()

# Run the operation by feeding input
print(sess.run(w4, feed_dict))
# Prints 24.0, which is (w1 + w2) * b1

# Now, save the graph
saver.save(sess, './ckpnt/my_test_model', global_step=1000)


And then load the trained model with:



import tensorflow as tf

sess = tf.Session()
# First let's load meta graph and restore weights
saver = tf.train.import_meta_graph('./ckpnt/my_test_model-1000.meta')
saver.restore(sess, tf.train.latest_checkpoint('./ckpnt'))

# Now access the restored placeholders and
# build a feed_dict to feed new data

graph = tf.get_default_graph()
w1 = graph.get_tensor_by_name("w1:0")
w2 = graph.get_tensor_by_name("w2:0")
feed_dict = {w1: 13.0, w2: 17.0}

# Now, access the op that you want to run.
op_to_restore = graph.get_tensor_by_name("op_to_restore:0")

print(sess.run(op_to_restore, feed_dict))
# This will print 60 which is calculated
# using new values of w1 and w2 and saved value of b1.


As you can see, we do not run the variable initializer in the restoring part; restoring is itself a way to initialize the variables. There is also a better way to save and restore a model, tf.train.Checkpoint, which lets you check whether the model was restored correctly.
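A minimal sketch of that object-based tf.train.Checkpoint API, assuming a recent TensorFlow running eagerly (the variable, attribute name, and directory below are illustrative, not part of the question's model):

```python
import tensorflow as tf

# One variable standing in for model state (illustrative).
bias = tf.Variable(2.0, name="bias")

# Checkpoint tracks objects by attribute name, not by graph tensor name.
ckpt = tf.train.Checkpoint(bias=bias)
save_path = ckpt.save("/tmp/demo_ckpt/ckpt")

bias.assign(0.0)  # clobber the value so the restore is observable

status = ckpt.restore(save_path)
status.assert_consumed()  # raises if anything saved was not matched
print(bias.numpy())  # 2.0 again, read back from the checkpoint
```

Here `assert_consumed()` is the check alluded to above: it fails loudly if the checkpoint and the model do not line up, instead of silently leaving variables at their freshly initialized values.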






























  • I have nearly 12-14 hidden layers. Am I supposed to restore the weights of each layer by using tf.get_default_graph().get_tensor_by_name('w1:0')?

    – Amruth Lakkavaram
    Jan 2 at 12:46











  • @AmruthLakkavaram graph.get_tensor_by_name("w1:0") is a placeholder. You usually have a few of them. As you may notice, we restore the bias successfully.

    – Amir
    Jan 2 at 12:49








  • Thank you very much @Amir, that helps. What I was doing wrong was that I was not restoring the placeholders and operations from the metagraph. I was freshly defining the operations and placeholders again, and running these freshly defined ops by feeding values into the freshly defined placeholders. That was the reason I was getting the 'Attempting to use uninitialized value' error, and why, when I initialized using global_variables_initializer, I did not get the expected results.

    – Amruth Lakkavaram
    Jan 3 at 4:13
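The mistake described in this last comment can be sketched as follows, assuming tf.compat.v1 on a recent TensorFlow (the tiny graph, tensor names, and paths are illustrative stand-ins for the actual model): after import_meta_graph and restore, look the placeholders and ops up in the restored graph instead of redefining them.

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

# Build and save a tiny stand-in graph.
with tf1.Graph().as_default():
    x = tf1.placeholder(tf.float32, name="x")
    w = tf1.Variable(3.0, name="w")
    pred = tf1.multiply(x, w, name="pred")
    saver = tf1.train.Saver()
    with tf1.Session() as sess:
        sess.run(tf1.global_variables_initializer())
        saver.save(sess, "/tmp/demo_model/model", global_step=0)

# WRONG would be: redefine x, w, pred here -- those fresh variables are
# uninitialized, and running the initializer would set them randomly.
# RIGHT: import the saved graph and fetch the existing tensors by name.
with tf1.Graph().as_default() as g:
    with tf1.Session() as sess:
        saver = tf1.train.import_meta_graph("/tmp/demo_model/model-0.meta")
        saver.restore(sess, tf1.train.latest_checkpoint("/tmp/demo_model"))
        x = g.get_tensor_by_name("x:0")
        pred = g.get_tensor_by_name("pred:0")
        print(sess.run(pred, feed_dict={x: 2.0}))  # 6.0, using the saved w
```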











answered Jan 2 at 12:36









Amir

7,99274275




7,99274275













  • I have nearly 12-14 hidden layers. Am I supposed to restore the weights of each layer by using tf.get_default_graph().get_tensor_by_name('w1:0') ?

    – Amruth Lakkavaram
    Jan 2 at 12:46











  • @AmruthLakkavaram graph.get_tensor_by_name("w1:0") is a placeholder. You usually have a few of them. As may notice we restore bias successfully.

    – Amir
    Jan 2 at 12:49








  • 1





    Thank you very much @Amir, that helps. What I was doing wrong was that I was not restoring the placeholders and operations to be restored, from the metagraph. I was freshly defining the operations and placeholders again, and I was running these freshly defined ops by feeding values in the freshly defined placeholders. That was the reason, I was getting 'Attempting to use uninitialized' error. So, when I was initializing using global_variable_initializer, I was not getting expected results.

    – Amruth Lakkavaram
    Jan 3 at 4:13


















