Check for empty rows within a Spark DataFrame?

I am running over several CSV files and doing some checks, and for one of the files I am getting a NullPointerException; I suspect that it contains some empty rows.



So I am running the following, and for some reason it gives me an OK output (zero rows are flagged):



    import pyspark.sql.functions as sf
    from pyspark.sql.types import BooleanType

    # check_empty is True only when every field in the row is None
    check_empty = lambda row: not any([False if k is None else True for k in row])
    check_empty_udf = sf.udf(check_empty, BooleanType())
    df.filter(check_empty_udf(sf.struct([col for col in df.columns]))).show()


Am I missing something in the filter function, or is it just not possible to extract empty rows from a DataFrame this way?
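
For reference, the same all-null check can also be written without a UDF, using built-in column functions. This is only a sketch, assuming the same df and the sf alias from the snippet above:

    from functools import reduce
    import pyspark.sql.functions as sf

    # keep only the rows in which every column is null
    all_null = reduce(lambda a, b: a & b, [sf.col(c).isNull() for c in df.columns])
    df.filter(all_null).show()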

apache-spark pyspark

asked Nov 19 '18 at 14:10 by ziedTn
2 Answers
You could use df.dropna() to drop empty rows and then compare the counts.

Something like:

    df_clean = df.dropna()  # dropna() with no arguments drops rows that contain any null value
    num_empty_rows = df.count() - df_clean.count()

answered Nov 19 '18 at 14:26 by Andrew F (edited Nov 19 '18 at 15:22 by shriyog)
• Thanks Andrew, but I would like to check the content of those rows so I have a clearer idea of what's happening.
  – ziedTn, Nov 20 '18 at 7:08










• The weird thing is that I got zero. The same piece of code works fine on the DataFrame produced by the dropna transformation, but it throws the exception without dropna.
  – ziedTn, Nov 20 '18 at 8:59
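
If the goal is to see the contents of the affected rows rather than just count them, one way is to filter on a combined null predicate, since dropna() with no arguments removes rows containing at least one null. A minimal sketch, assuming the same df:

    from functools import reduce
    import pyspark.sql.functions as sf

    # rows that dropna() would remove: at least one column is null
    any_null = reduce(lambda a, b: a | b, [sf.col(c).isNull() for c in df.columns])
    df.filter(any_null).show(truncate=False)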

You could use a built-in option for dealing with such scenarios.

    val df = spark.read
      .format("csv")
      .option("header", "true")
      .option("mode", "DROPMALFORMED") // Drop empty/malformed rows
      .load("hdfs:///path/file.csv")

Check this reference: https://docs.databricks.com/spark/latest/data-sources/read-csv.html#reading-files






answered Nov 19 '18 at 14:59 by shriyog
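
Since the question is tagged pyspark, a rough Python equivalent of the same reader options might look like the sketch below (reusing the placeholder path from the answer):

    # DROPMALFORMED tells the CSV reader to skip rows that cannot be parsed
    df = (spark.read
          .format("csv")
          .option("header", "true")
          .option("mode", "DROPMALFORMED")
          .load("hdfs:///path/file.csv"))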