When training a Keras model, is there a way to have the GPU take small breaks (sleeps) in between sets of epochs?
My Keras model is currently training on a lot of data, and I don't feel comfortable letting my GPU reach 85 degrees. Is there a way to tell my GPU to take a break every set number of epochs?
I understand I could break the process into multiple training cycles, but because I am using ReduceLROnPlateau as a callback on an RNN model, I would still like the entire training process to happen in one training cycle, with the GPU taking small breaks. That would allow for longer training runs with less risk to my personal hardware.
(Not adding code since this is a general question.)
python tensorflow machine-learning keras artificial-intelligence
The GPU is just fine at 85 degrees and it has overheating protection anyway. – ivan_pozdeev, Jan 2 at 2:00
It looks like you could simply add a LambdaCallback and define on_epoch_begin as lambda e, l: time.sleep(10) if e % 10 == 0 else None, or similar. – Chinmay Kanchi, Jan 2 at 2:06
Usually it is, but my friend told me how his own GPU hit 85+ degrees over a 2-day period and crashed, losing all his progress. So I am just worried, and spending 20 minutes or so to save 7-12 hours is a pretty good deal in my book. Regardless, thank you for the reassurance. But I am interested in this LambdaCallback now... – Eric C 0114, Jan 2 at 2:45
I just added e != 0 and it worked exactly as I needed. Thanks again! – Eric C 0114, Jan 2 at 3:10
asked Jan 2 at 1:57 by Eric C 0114
Your Answer
StackExchange.ifUsing("editor", function () {
StackExchange.using("externalEditor", function () {
StackExchange.using("snippets", function () {
StackExchange.snippets.init();
});
});
}, "code-snippets");
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "1"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f54000409%2fwhen-training-a-keras-model-is-there-a-way-to-have-the-gpu-to-take-small-breaks%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
0
active
oldest
votes
0
active
oldest
votes
active
oldest
votes
active
oldest
votes
Thanks for contributing an answer to Stack Overflow!
- Please be sure to answer the question. Provide details and share your research!
But avoid …
- Asking for help, clarification, or responding to other answers.
- Making statements based on opinion; back them up with references or personal experience.
To learn more, see our tips on writing great answers.
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f54000409%2fwhen-training-a-keras-model-is-there-a-way-to-have-the-gpu-to-take-small-breaks%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
Required, but never shown
2
The GPU is just fine at 85 degrees and it has overheating protection anyway.
– ivan_pozdeev
Jan 2 at 2:00
1
It looks like you could simply add a
LambdaCallback
and defineon_epoch_begin
aslambda e, l: time.sleep(10) if e % 10 == 0 else None
or similar.– Chinmay Kanchi
Jan 2 at 2:06
Usually, it is my friend told me how his own personal hit 85+ degrees in a 2 day period and crashed losing all his progress. So I am just worried and saving 20 minutes or so to save 7-12 hours is a pretty good deal in my book. Regardless thank you for reassurance. But I am interested in this LambdaCallback now...
– Eric C 0114
Jan 2 at 2:45
I just added e!=0 and it worked exactly as I needed thanks again!
– Eric C 0114
Jan 2 at 3:10