Check for empty rows within a Spark DataFrame?
I am iterating over several CSV files and running some checks on each one. For one file I am getting a NullPointerException, and I suspect that the file contains some empty rows.

So I run the following, but for some reason it reports everything as OK:

import pyspark.sql.functions as sf
from pyspark.sql.types import BooleanType

# True only when every field in the row is None
check_empty = lambda row: not any([False if k is None else True for k in row])
check_empty_udf = sf.udf(check_empty, BooleanType())
df.filter(check_empty_udf(sf.struct([col for col in df.columns]))).show()

Am I missing something in the filter function, or is it simply not possible to extract empty rows from a DataFrame this way?
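For reference, the same check can be written without a Python UDF, using only built-in column expressions (a minimal sketch, assuming df is the DataFrame above):

from functools import reduce
from pyspark.sql import functions as sf

# A row is "empty" when every one of its columns is null
all_null = reduce(lambda a, b: a & b, [sf.col(c).isNull() for c in df.columns])
df.filter(all_null).show()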
apache-spark pyspark
asked Nov 19 '18 at 14:10 by ziedTn
2 Answers
You could use df.dropna() to drop empty rows and then compare the counts. Something like:

# dropna() with the default how="any" drops rows containing any null;
# pass how="all" to drop only rows where every column is null
df_clean = df.dropna()
num_empty_rows = df.count() - df_clean.count()

answered Nov 19 '18 at 14:26 by Andrew F (edited Nov 19 '18 at 15:22 by shriyog)
Thanks Andrew, but I would like to check the content of those rows so I have a clearer idea of what's happening.
– ziedTn Nov 20 '18 at 7:08

The weird thing is that I got zero; yet the same piece of code works fine on the DataFrame produced by the dropna transformation, whereas it throws the exception without dropna.
– ziedTn Nov 20 '18 at 8:59
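To inspect the rows in question rather than just count them, a minimal sketch (assuming the same df) that filters with built-in column expressions:

from functools import reduce
from pyspark.sql import functions as sf

# Rows that dropna() would remove: at least one column is null
any_null = reduce(lambda a, b: a | b, [sf.col(c).isNull() for c in df.columns])
df.filter(any_null).show(truncate=False)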
You could use a built-in option for dealing with such scenarios:

val df = spark.read
  .format("csv")
  .option("header", "true")
  .option("mode", "DROPMALFORMED") // Drop empty/malformed rows
  .load("hdfs:///path/file.csv")
Check this reference: https://docs.databricks.com/spark/latest/data-sources/read-csv.html#reading-files

answered Nov 19 '18 at 14:59 by shriyog