Sum of multiple columns based on condition
I want to sum up an array of columns per row based on a condition. I found a way to do it, but it does not seem like a good idea when I have 20+ columns to sum up, since it would generate an additional column for each one.

Wanted result: the sum of the values of all columns that end with "_val" where the value is either 0 or 1 (or < 2; I just want to exclude the value 3 right now).



val df1 = Seq(
  ("id1", 1, 0, 3),
  ("id2", 0, 0, 3),
  ("id3", 1, 1, 3))
  .toDF("id", "bla_val", "blub_val", "bli_val")


My solution, with the wanted result in the column "sum":



val channelNames = df1.schema.fieldNames.filter(_.endsWith("_val"))
val ch = channelNames.map(x => col(x + "_redval"))

val df2 = df1.select(col("*") +: (channelNames.map(c =>
  when(col(c) === 1, lit(1))
    .otherwise(lit(0)).as(c + "_redval"))): _*)

val df3 = df2.withColumn("sum", ch.reduce(_ + _))
df3.show()


Example output:



+---+-------+--------+-------+--------------+---------------+--------------+---+
| id|bla_val|blub_val|bli_val|bla_val_redval|blub_val_redval|bli_val_redval|sum|
+---+-------+--------+-------+--------------+---------------+--------------+---+
|id1|      1|       0|      3|             1|              0|             0|  1|
|id2|      0|       0|      3|             0|              0|             0|  0|
|id3|      1|       1|      3|             1|              1|             0|  2|
+---+-------+--------+-------+--------------+---------------+--------------+---+

  • If you want to do it dynamically, then what you have now is correct. Don't worry about the additional columns; they let you manipulate the data easily, and you can drop them later.
    – stack0114106
    Nov 19 '18 at 16:39
Tags: scala, apache-spark
asked Nov 19 '18 at 14:55 by user2811630
2 Answers
You can combine the expressions using a reduce() operation. Check this out:



val df1 = Seq(
  ("id1", 1, 0, 3),
  ("id2", 0, 0, 3),
  ("id3", 1, 1, 3))
  .toDF("id", "bla_val", "blub_val", "bli_val")

val newcols = df1.columns.filter(_.endsWith("_val"))
  .map(x => when(col(x) === 1, lit(1)).otherwise(lit(0)))
  .reduce(_ + _)
df1.withColumn("redval_count", newcols).show(false)


Output:



+---+-------+--------+-------+------------+
|id |bla_val|blub_val|bli_val|redval_count|
+---+-------+--------+-------+------------+
|id1|1      |0       |3      |1           |
|id2|0      |0       |3      |0           |
|id3|1      |1       |3      |2           |
+---+-------+--------+-------+------------+





answered Nov 19 '18 at 20:37 by stack0114106
  • Thank you very much. I tested your code just now and it worked like a charm. One little question: newcols is of datatype Column. From my understanding this is something like a dataframe with a single column containing the reduced values. How does Spark know to which row of the dataframe an entry of the "column" (newcols) has to be appended?
    – user2811630
    Nov 21 '18 at 8:07












  • It is not a dataframe, just an expression over columns. We can use it with any dataframe that has those columns.
    – stack0114106
    Nov 21 '18 at 8:32
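The distinction in this exchange can be illustrated off-Spark: a Column like newcols is not data but an unevaluated expression that produces one value per row. A rough plain-Scala analogy (an assumption for illustration: a Map stands in for a row, and a function from row to value stands in for a Column expression):

```scala
type Row = Map[String, Int]

// Each "column expression" is a recipe: given a row, produce a value
val cols = Seq("bla_val", "blub_val", "bli_val")
val exprs: Seq[Row => Int] =
  cols.map(c => (r: Row) => if (r(c) == 1) 1 else 0)

// reduce(_ + _) combines the recipes pointwise into one bigger recipe
val combined: Row => Int =
  exprs.reduce((a, b) => (r: Row) => a(r) + b(r))

combined(Map("bla_val" -> 1, "blub_val" -> 1, "bli_val" -> 3))  // 2
```

Spark never has to match entries of newcols to rows: the expression is evaluated per row when the query runs, just as combined is applied to one Map at a time here.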
def sumNot3(s: Seq[Int]): Int = {
  // Sum the entries, ignoring the value 3
  s.filter(_ != 3).sum
}

// Wrap the function as a UDF so it can be applied to an array column
val sumNot3Udf = udf(sumNot3(_: Seq[Int]))

// Collect all columns whose name ends with "_val"
val channelNameCols = df1.schema.fieldNames.filter(_.endsWith("_val")).map(c => col(c))

// Pack the "_val" columns into one array per row and sum it with the UDF
df1.select(sumNot3Udf(array(channelNameCols: _*)).as("sum"))
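The helper in this answer is plain Scala, so its behaviour can be checked on the example rows directly; the udf and array wrapping only serve to feed each row's "_val" entries to it. A sketch, reusing the data from df1:

```scala
// Sum the entries of a row, ignoring the value 3
def sumNot3(s: Seq[Int]): Int = s.filter(_ != 3).sum

// The *_val entries of the three example rows
val rows = Seq(Seq(1, 0, 3), Seq(0, 0, 3), Seq(1, 1, 3))

val sums = rows.map(sumNot3)  // Seq(1, 0, 2), matching the wanted "sum" column
```

Note that this excludes exactly the value 3 rather than keeping only 0 and 1: equivalent for this data, but a 2 would be counted at face value rather than excluded.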





answered Nov 19 '18 at 15:55 by Terry Dactyl
  • While this might answer the author's question, it lacks some explanatory words and/or links to documentation. Raw code snippets are not very helpful without some phrases around them. You may also find "how to write a good answer" very helpful. Please edit your answer. – From Review
    – Nick
    Nov 20 '18 at 1:09