Metal Compute versus ARM Neon



























I was considering migrating my current NEON code (NEON is ARM's vector-processing instruction set) to Metal, but after running the HelloCompute sample code (which demonstrates how to perform data-parallel computations on the GPU), the GPU seems much slower than the CPU.

The HelloCompute project takes 13 ms on an iPhone 5S to run this very basic kernel on a 512 x 512 RGBA texture:

    // Signature shown in full for context (kernel name and texture indices assumed).
    kernel void copyKernel(texture2d<half, access::read>  inTexture  [[texture(0)]],
                           texture2d<half, access::write> outTexture [[texture(1)]],
                           uint2 gid [[thread_position_in_grid]])
    {
        half4 inColor = inTexture.read(gid);
        outTexture.write(inColor, gid);
    }


In comparison, my NEON code takes less than 1 ms!

Shouldn't the GPU be at least as fast as the CPU?
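
For context, here is a minimal sketch of how one could time a single dispatch of such a kernel in isolation, away from any MTKView draw loop. The device, queue, pipeline state, and textures are assumed to be created once up front, and the function name and threadgroup sizes are illustrative:

    import Metal
    import QuartzCore   // for CACurrentMediaTime

    // Times one compute dispatch over a texture pair, excluding one-time setup
    // (device, queue, pipeline, texture creation) and any display synchronization.
    func timeOneDispatch(queue: MTLCommandQueue,
                         pipeline: MTLComputePipelineState,
                         inTexture: MTLTexture,
                         outTexture: MTLTexture) -> CFTimeInterval {
        let commandBuffer = queue.makeCommandBuffer()!
        let encoder = commandBuffer.makeComputeCommandEncoder()!
        encoder.setComputePipelineState(pipeline)
        encoder.setTexture(inTexture, index: 0)
        encoder.setTexture(outTexture, index: 1)

        // One thread per pixel, 16 x 16 threads per threadgroup.
        let threadsPerGroup = MTLSize(width: 16, height: 16, depth: 1)
        let groupCount = MTLSize(width: (inTexture.width + 15) / 16,
                                 height: (inTexture.height + 15) / 16,
                                 depth: 1)
        encoder.dispatchThreadgroups(groupCount, threadsPerThreadgroup: threadsPerGroup)
        encoder.endEncoding()

        let start = CACurrentMediaTime()
        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()   // a blocking wait is fine for a benchmark
        return CACurrentMediaTime() - start  // submission plus GPU execution, no vsync
    }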

































  • That is a hello-world example; you don't want to use it to compare times, since it is just a simple read and write. The GPU wins on more complex operations and on really large amounts of I/O, where the reads issued by different compute launches can all run at the same time. The value really depends on exactly what operations you are doing and how easily they can be done in parallel.

    – MoDJ
    Nov 20 '18 at 7:16











  • Could your test unintentionally be limited to the screen refresh rate?

    – Rhythmic Fistman
    Nov 20 '18 at 22:00
















Tags: metal, neon

asked Nov 20 '18 at 2:47 by Yoshi

1 Answer
































GPGPU only makes sense when you are dealing with a huge amount of computation, because the data-transfer and hardware-initialization time spoils the fun, on top of horrible APIs such as OpenCL.

NEON, on the other hand, is tightly integrated into the CPU's main pipeline and is therefore far more responsive, while still packing more than adequate punch.

AI and crypto-coin mining are pretty much the only areas I've seen so far where GPGPU makes sense. For anything lighter, SIMD is the way to go.

And since crypto-coin mining is virtually dead, and IP blocks dedicated to AI computing are around the corner, I'd say GPGPU is almost pointless.
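
To make the overhead argument concrete, a Metal command buffer can report how long the GPU itself was busy, separately from the end-to-end latency the CPU sees. A minimal sketch, assuming an already-encoded command buffer for a trivial kernel like the one in the question (it uses the gpuStartTime/gpuEndTime properties that newer OS versions expose on MTLCommandBuffer):

    import Metal
    import QuartzCore   // for CACurrentMediaTime

    // Compares the GPU's own execution time with the CPU-visible round trip.
    // `commandBuffer` is assumed to be fully encoded but not yet committed.
    func reportTimes(commandBuffer: MTLCommandBuffer) {
        let submitted = CACurrentMediaTime()
        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()
        let roundTrip = CACurrentMediaTime() - submitted

        // gpuStartTime/gpuEndTime bracket only the time the GPU spent executing.
        let gpuTime = commandBuffer.gpuEndTime - commandBuffer.gpuStartTime
        print("GPU execution: \(gpuTime * 1000) ms, CPU round trip: \(roundTrip * 1000) ms")
        // For a tiny 512 x 512 pass-through, the GPU portion is typically a small
        // fraction of the round trip; the rest is submission, scheduling, and
        // synchronization cost that inline NEON code on the CPU never pays.
    }

Seen this way, a large wall-clock figure for a trivial kernel says more about per-dispatch and synchronization overhead than about the GPU's throughput, which is exactly the trade-off described above.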






answered Nov 20 '18 at 3:17 by Jake 'Alquimista' LEE (edited Nov 20 '18 at 3:22)





























