Can I distribute training and inference of a DNN architecture over cloud and edge devices?
I'm doing research on distributed DNNs. From what I've gathered, we can distribute DNN computation across many GPUs, and we can also run it on mobile devices. Inference architectures, however, are usually single-platform: they live either on the mobile device or in the cloud.



My question is:



Can we distribute the training and inference phases of a DNN architecture across a joint platform (both cloud and mobile)? If so, how can it be done?
tensorflow
asked Jan 2 at 11:51 by ienxienx, edited Jan 3 at 0:29 by feliks
























1 Answer
There is a plethora of options to choose from, depending on your framework. Horovod is mostly framework-agnostic and can be used for distributed training; it also covers your need to use cloud services. While it is entirely possible to build your own setup with Distributed TensorFlow, be aware that this is a lower-level approach than Horovod and is therefore missing some bells and whistles.
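To make this concrete, here is a minimal sketch of data-parallel training with Horovod on top of tf.keras. It is an illustration under assumptions, not a prescription from this answer: the model, the data pipeline, and the cluster layout are placeholders, and it assumes Horovod is installed with TensorFlow support and the script is launched on your cloud workers with something like horovodrun -np 4 python train.py.

# Minimal Horovod + tf.keras sketch (placeholder model and data).
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()

# Pin each worker process to one GPU (assumes one GPU per local process).
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

# Placeholder model; swap in your own architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Scale the learning rate with the number of workers and wrap the optimizer
# so gradients are averaged across workers via ring-allreduce.
opt = tf.keras.optimizers.SGD(0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)

model.compile(loss='sparse_categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])

callbacks = [
    # Broadcast rank 0's initial weights so every worker starts identically.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]

# x_train / y_train stand in for whatever data each worker loads;
# only rank 0 prints progress.
# model.fit(x_train, y_train, epochs=5, callbacks=callbacks,
#           verbose=1 if hvd.rank() == 0 else 0)

Every worker runs the same script and Horovod handles the gradient averaging, which is why this scales from a single multi-GPU machine to a fleet of cloud VMs with little code change.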



Distributed inference, on the other hand, is far less common: inference needs much less computational power than training and is embarrassingly parallel most of the time, since each input can be processed independently.
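For the cloud-plus-mobile combination you describe, the usual pattern is therefore to train in the cloud and export the trained model for on-device inference. As a hedged sketch (the model below is just a placeholder standing in for whatever your cloud training job produced), the TensorFlow Lite converter handles that export:

# Convert a cloud-trained Keras model to TensorFlow Lite for edge inference.
import tensorflow as tf

# Placeholder for the model produced by the training job above.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Optional: default optimizations (e.g. weight quantization) shrink the
# model so it fits comfortably on a mobile or edge device.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)

The resulting model.tflite file is bundled with the mobile app and run locally with the TensorFlow Lite interpreter, while heavier or batch workloads can stay in the cloud.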






answered Jan 2 at 13:24 by feliks, edited Jan 2 at 13:32