Epochs and batches control in Keras
I would like to implement an autoencoder model that works as follows:
for epoch in xrange(100):
    for X_batch in batch_list:
        model.train_on_batch(X_batch, X_batch)
        training_error = model.evaluate(X_batch, X_batch, verbose=0)
    # average the training error over the number of batches considered
    # and save it as the epoch training error
    # compute the validation error in the same fashion over the validation data
    # compare the two errors and decide whether to go on training or to stop
I have looked around on the Internet and already asked about this, and it was suggested that I use fit_generator, but I have not understood how to implement it. Or should I use the train_on_batch method, or fit with the number of epochs equal to 1, to fit the model properly?
What is the best practice in this case? Do you have an example or a similar question you could link me to?
python keras training-data
asked Nov 19 '18 at 16:04 by Guido
edited Nov 19 '18 at 20:19 by halfer
Can you explain the first paragraph again? Do you mean that you want to stop or continue training on the basis of the validation error? Something like early stopping?
– Garvita Tiwari, Nov 19 '18 at 16:20
I have updated with your suggestions, thank you. Yes, exactly. But my problem actually comes before that: is it correct to use train_on_batch? Or should I use fit? Or fit_generator? I cannot find any exhaustive example on the Internet and I am just guessing.
– Guido, Nov 19 '18 at 16:28
I have just written it as pseudocode; I have not implemented it yet. I am wondering what I should use.
– Guido, Nov 19 '18 at 16:30
1 Answer
From what I can understand, you want to use the validation error as an early-stopping criterion. The good news is that Keras already has an early-stopping callback, so all you need to do is create the callback and have it checked during training after each epoch.
keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=0, verbose=0, mode='auto', baseline=None, restore_best_weights=False)
Let us look at train_on_batch and fit()
train_on_batch(x, y, sample_weight=None, class_weight=None)
fit(x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None)
You can see that train_on_batch doesn't take any callbacks as input, so the better choice is to use fit here, unless you want to implement the early-stopping logic yourself.
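If you did want the explicit loop from your pseudocode, a minimal sketch of doing early stopping by hand around train_on_batch might look like the one below. The names model, batch_list and X_val, the patience value and the weights file name are assumptions, and the model is assumed to be compiled with a single loss and no extra metrics, so train_on_batch and evaluate return a scalar loss.
import numpy as np

best_val_loss = np.inf
patience, wait = 2, 0

for epoch in range(100):
    batch_losses = []
    for X_batch in batch_list:
        # autoencoder: the input is also the target
        batch_losses.append(model.train_on_batch(X_batch, X_batch))
    train_loss = np.mean(batch_losses)                   # epoch training error

    val_loss = model.evaluate(X_val, X_val, verbose=0)   # epoch validation error
    print("epoch %d: train %.4f, val %.4f" % (epoch, train_loss, val_loss))

    if val_loss < best_val_loss:
        best_val_loss = val_loss
        wait = 0
        model.save_weights('best_weights.h5')            # remember the best model so far
    else:
        wait += 1
        if wait >= patience:                             # no improvement for `patience` epochs
            break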
Now you can call fit as follows:
from keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [EarlyStopping(monitor='val_loss', patience=2),
             ModelCheckpoint(filepath='path to latest ckpt', monitor='val_loss', save_best_only=True)]
history = model.fit(train_features, train_target, epochs=num_epochs, callbacks=callbacks,
                    verbose=0, batch_size=your_choice, validation_data=(val_features, val_target))
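For the autoencoder in the question, where the input is also the target, the same call might look like the following sketch. X_train and X_val are assumed NumPy arrays, and the checkpoint file name, batch size and patience are placeholders.
from keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=True),
             ModelCheckpoint(filepath='best_autoencoder.h5', monitor='val_loss', save_best_only=True)]

history = model.fit(X_train, X_train,                    # input == target for an autoencoder
                    validation_data=(X_val, X_val),
                    epochs=100, batch_size=32,
                    callbacks=callbacks, verbose=0)
fit then runs the epoch/batch loop for you, computes val_loss after every epoch, and stops as soon as it has not improved for `patience` epochs.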
answered Nov 19 '18 at 16:49 by Garvita Tiwari