CNN training exceeds number of given cores in PBS












I'm using a CNN called darknet/YOLO for deep learning on a remote shared cluster with NVIDIA graphics cards. The cluster runs Linux with the PBS job scheduling system.



I'm submitting a job to train the neural network on a GPU, which works well.



The problem is the huge number of processors consumed during training. I usually submit a job with 8 processors, like this:



qsub -q gpu -l select=1:ncpus=8:ngpus=1:mem=15gb:gpu_cap=cuda61


but it's always killed because the number of assigned processors is exceeded. Even when I increase the number to 20, it's still exceeded.
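
For reference, the same request written as a PBS batch script looks like the sketch below; the walltime and the darknet invocation are placeholders, not taken from the question:

    #!/bin/bash
    # minimal PBS job script sketch; walltime below is an assumed example value
    #PBS -q gpu
    #PBS -l select=1:ncpus=8:ngpus=1:mem=15gb:gpu_cap=cuda61
    #PBS -l walltime=24:00:00

    # run from the directory the job was submitted from
    cd "$PBS_O_WORKDIR"

    # placeholder command -- substitute your real darknet invocation
    ./darknet detector train data/obj.data cfg/yolov3.cfg darknet53.conv.74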



I don't know why darknet consumes so many processors on the server, even though I can run the same job on my notebook with an Intel i5 processor (slowly and inefficiently).



What I've tried:



1) Setting cgroups=cpuacct, which should force the job NOT to use more processors than assigned, but it didn't work at all. It seems the restriction only kicks in when the server is short of resources for other jobs; as long as free processors are available, the limit isn't enforced (https://drill.apache.org/docs/configuring-cgroups-to-control-cpu-usage/#cpu-limits). This can be verified from inside the job; see the inspection sketch after this list.



2) Setting place=exclhost (exclusive host placement), which keeps the job from being killed when it exceeds the assigned resources. On the other hand, with this flag it takes about 7 days for the job to even start, and I need to train the network every day.
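
To see what restriction (if any) is actually applied, the job's cgroup can be inspected from inside the job; a rough sketch, noting that cgroup mount paths and controller layout differ between clusters (cgroup v1 shown):

    # which cgroups the current shell belongs to
    cat /proc/self/cgroup

    # cgroup v1: the CPUs this job's cpuset allows (path varies per site)
    cat /sys/fs/cgroup/cpuset$(awk -F: '/cpuset/ {print $3}' /proc/self/cgroup)/cpuset.cpus

    # the CPU affinity the kernel actually enforces on this shell
    nproc
    taskset -cp $$

If nproc reports far more CPUs than ncpus=8, the scheduler is only accounting for CPU time rather than pinning the job, which would match the behaviour described in 1).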



Question:



I don't need these extra processors and I don't understand why darknet uses so many of them. How can I force the job NOT to exceed the given number of processors? Or is there another way to solve this kind of problem?
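
One possible approach, assuming darknet was built with OPENMP=1 and therefore sizes its OpenMP thread pool from the visible core count rather than from the PBS allocation (an assumption about the build, not something stated in the question): cap the thread count and the CPU affinity explicitly in the job script before launching the training.

    # limit OpenMP to the number of cores requested from PBS
    export OMP_NUM_THREADS=8

    # optionally pin the process to specific cores as a hard limit;
    # the core list 0-7 is illustrative -- use the cores assigned to your job
    taskset -c 0-7 ./darknet detector train data/obj.data cfg/yolov3.cfg darknet53.conv.74

With the affinity set, the kernel never schedules the process on more cores than listed, regardless of how many threads it spawns.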










linux deep-learning pbs yolo darknet

edited Dec 6 '18 at 14:51
asked Nov 19 '18 at 15:02 by Filip Kočica
1 Answer
































It is more likely a mismatch between the admin-set restrictions for that queue and your request. So ping your admin and get the details of the queues (e.g. queue1 ppn, GPUs).






answered Nov 19 '18 at 23:46 by nav
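
As a complement to this answer: the queue limits can usually be read directly rather than waiting for the admins; a short sketch using standard PBS commands (output format varies between PBS Pro and Torque):

    # full configuration of the gpu queue, including any
    # resources_max / resources_default limits on ncpus, mem, ngpus
    qstat -Qf gpu

    # what the scheduler recorded for a particular job:
    # requested resources, used resources, and the exit status
    qstat -f <jobid>

Comparing the queue's resources_max values against the job's resources_used should show exactly which limit is being tripped.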