How to count and get the sum of values for unique ids in a Spark DataFrame?

I have the following DataFrame and I am looking to aggregate by id and also sum the 'value' column for each unique id:



import org.apache.spark.sql.functions._
import spark.implicits._

// some data...
val df = Seq(
  (1, 2),
  (1, 4),
  (1, 1),
  (2, 2),
  (2, 2),
  (3, 2),
  (3, 1),
  (3, 1)
).toDF("id", "value")

df.show()


gives the following:



+---+-----+
| id|value|
+---+-----+
|  1|    2|
|  1|    4|
|  1|    1|
|  2|    2|
|  2|    2|
|  3|    2|
|  3|    1|
|  3|    1|
+---+-----+


Using the count function I know I can count the rows for each unique id:



df.groupBy($"id").count.orderBy($"id".asc).show()

+---+-----+
| id|count|
+---+-----+
|  1|    3|
|  2|    2|
|  3|    3|
+---+-----+


but I also want to sum (or get the average of) the values for each unique id, so the resulting table should be as follows:



+---+-----+--------+
| id|count|valueSum|
+---+-----+--------+
|  1|    3|       7|
|  2|    2|       4|
|  3|    3|       4|
+---+-----+--------+


Is there a way to do this programmatically?
1 Answer
The way to do it is with aggregate functions. Spark comes with a number of predefined ones (avg, sum, count, first, collect_list, collect_set, min, max, ...), so for your example you can do it like this:



df.groupBy("id").agg(
  count("id").as("countOfIds"),
  sum("value").as("sumOfValues"),
  avg("value").as("avgOfValues")
).show

+---+----------+-----------+------------------+
| id|countOfIds|sumOfValues|       avgOfValues|
+---+----------+-----------+------------------+
|  1|         3|          7|2.3333333333333335|
|  3|         3|          4|1.3333333333333333|
|  2|         2|          4|               2.0|
+---+----------+-----------+------------------+
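
If you only need one aggregate function per column, agg also accepts a Map from column name to aggregate function name (a minimal sketch; Spark generates the result column name itself, e.g. sum(value)):

// shorthand: one aggregate function per column
df.groupBy("id").agg(Map("value" -> "sum")).show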


You can view the available functions in the org.apache.spark.sql.functions package documentation, under the ones listed as "Aggregate functions". All of them have a SQL equivalent if you prefer the SQL-oriented syntax.
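
For example, the same aggregation written in SQL (a minimal sketch, assuming the DataFrame has been registered as a temporary view named "df"):

// register the DataFrame so it can be queried with SQL
df.createOrReplaceTempView("df")

// equivalent of the groupBy/agg call above
spark.sql("""
  SELECT id,
         COUNT(id)  AS countOfIds,
         SUM(value) AS sumOfValues,
         AVG(value) AS avgOfValues
  FROM df
  GROUP BY id
  ORDER BY id
""").show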