I want to calculate using three columns and produce a single column showing all three values

I am loading a file into a DataFrame in Spark on Databricks:

spark.sql("""select A, X, Y, Z from fruits""")

A    X      Y      Z
1E5  1.000  0.000  0.000
1U2  2.000  5.000  0.000
5G6  3.000  0.000  10.000

I need the output as:

A    D
1E5  X 1
1U2  X 2, Y 5
5G6  X 3, Z 10

I am not able to find a solution.







scala apache-spark apache-spark-sql






asked Nov 21 '18 at 15:54, edited Nov 21 '18 at 17:31 – Ravi Anand Vicky

  • Can you add more details about what you are trying to do and what did not work?

    – Shankar Koirala
    Nov 21 '18 at 16:44































2 Answers




















Each column name can be joined with its value, and then all the pairs can be joined into one column, separated by commas:



// imports for col, when, lit, concat, length and IntegerType;
// assumes a SparkSession `spark` in scope (as in spark-shell / Databricks)
// and import spark.implicits._ for .toDF
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.IntegerType

// data
val df = Seq(
  ("1E5", 1.000, 0.000, 0.000),
  ("1U2", 2.000, 5.000, 0.000),
  ("5G6", 3.000, 0.000, 10.000))
  .toDF("A", "X", "Y", "Z")

// build a "NAME value" piece per column, or an empty string when the value is 0
val columnsToConcat = List("X", "Y", "Z")
val columnNameValueList = columnsToConcat.map(c =>
  when(col(c) =!= 0, concat(lit(c), lit(" "), col(c).cast(IntegerType)))
    .otherwise("")
)

// join the non-empty pieces with ", "
val valuesJoinedByCommaColumn = columnNameValueList.reduce((a, b) =>
  when(length(a) =!= 0 && length(b) =!= 0, concat(a, lit(", "), b))
    .otherwise(concat(a, b))
)

val result = df.withColumn("D", valuesJoinedByCommaColumn)
  .drop(columnsToConcat: _*)


Output:



+---+---------+
|A  |D        |
+---+---------+
|1E5|X 1      |
|1U2|X 2, Y 5 |
|5G6|X 3, Z 10|
+---+---------+


This solution is similar to the one proposed by stack0114106, but is more explicit.
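
A further simplification (a sketch, not from either answer) is to leave a piece null when its value is 0 and rely on concat_ws, which skips null inputs, so no empty-string handling or separator clean-up is needed:

import org.apache.spark.sql.functions._

// a when(...) without .otherwise yields null for non-matching rows;
// concat_ws silently drops null pieces when joining with ", "
val parts = List("X", "Y", "Z").map(c =>
  when(col(c) =!= 0, concat(lit(c + " "), col(c).cast("int").cast("string")))
)
val resultWs = df.withColumn("D", concat_ws(", ", parts: _*)).drop("X", "Y", "Z")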






answered Nov 22 '18 at 12:29 – pasha701
























  • Hey, thanks for enhancing it. The OP had mentioned that it didn't work for him; not sure what the issue is.

    – stack0114106
    Nov 22 '18 at 19:16

































Check this out:



scala>  val df =  Seq(("1E5",1.000,0.000,0.000),("1U2",2.000,5.000,0.000),("5G6",3.000,0.000,10.000)).toDF("A","X","Y","Z")
df: org.apache.spark.sql.DataFrame = [A: string, X: double ... 2 more fields]

scala> df.show()
+---+---+---+----+
| A| X| Y| Z|
+---+---+---+----+
|1E5|1.0|0.0| 0.0|
|1U2|2.0|5.0| 0.0|
|5G6|3.0|0.0|10.0|
+---+---+---+----+

scala> val newcol = df.columns.drop(1).map( x=> when(col(x)===0,lit("")).otherwise(concat(lit(x),lit(" "),col(x).cast("int").cast("string"))) ).reduce( (x,y) => concat(x,lit(", "),y) )
newcol: org.apache.spark.sql.Column = concat(concat(CASE WHEN (X = 0) THEN ELSE concat(X, , CAST(CAST(X AS INT) AS STRING)) END, , , CASE WHEN (Y = 0) THEN ELSE concat(Y, , CAST(CAST(Y AS INT) AS STRING)) END), , , CASE WHEN (Z = 0) THEN ELSE concat(Z, , CAST(CAST(Z AS INT) AS STRING)) END)

scala> df.withColumn("D",newcol).withColumn("D",regexp_replace(regexp_replace('D,", ,",","),", $", "")).drop("X","Y","Z").show(false)
+---+---------+
|A  |D        |
+---+---------+
|1E5|X 1      |
|1U2|X 2, Y 5 |
|5G6|X 3, Z 10|
+---+---------+


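
For reference, this transcript relies on the spark-shell's pre-imports; outside the shell the equivalent setup would be roughly the following (the appName is arbitrary):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("fruits").getOrCreate()
import spark.implicits._                 // enables .toDF and the 'D column symbol
import org.apache.spark.sql.functions._  // when, col, lit, concat, regexp_replace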





answered Nov 21 '18 at 18:29 – stack0114106
























  • I am getting an error: value withColumn is not a member of org.apache.spark.sql.Column: newcol.withColumn("D",newcol).withColumn("D",regexp_replace(regexp_replace('Inventory_Status,", ,",","),", $", "")).drop("X","Y","Z").show(false)

    – Ravi Anand Vicky
    Nov 21 '18 at 19:32
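
Note that withColumn is defined on DataFrame, not on Column, so the chain in the comment above fails because it starts from newcol; a corrected call (assuming the df and newcol from the answer) would be:

// withColumn is a DataFrame method: start the chain from df, not from the Column newcol
df.withColumn("D", newcol)
  .withColumn("D", regexp_replace(regexp_replace(col("D"), ", ,", ","), ", $", ""))
  .drop("X", "Y", "Z")
  .show(false)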











  • Add import spark.implicits._ after the Spark session statement.

    – stack0114106
    Nov 21 '18 at 19:35











  • Does it work for you?

    – stack0114106
    Nov 22 '18 at 2:44











  • No, it's not working.

    – Ravi Anand Vicky
    Nov 22 '18 at 6:30











  • Which Spark version are you using?

    – stack0114106
    Nov 22 '18 at 6:32










