Why is this Scala code apparently running only on the Spark driver node instead of on the Spark workers?
I was using the code mentioned here to create a HashMap in Scala; copy-pasting it below for convenience:
def genList(xx: String) = {
  Seq("one", "two", "three", "four")
}

val oriwords = Set("hello", "how", "are", "you")

val newMap = (Map[String, (String, Int)]() /: oriwords) { (cmap, currentWord) =>
  val xv = 2
  genList(currentWord).foldLeft(cmap) { (acc, ps) =>
    val src = acc.get(ps)
    if (src.isEmpty) {
      acc + (ps -> ((currentWord, xv)))
    } else if (src.get._2 < xv) {
      acc + (ps -> ((currentWord, xv)))
    } else {
      acc
    }
  }
}

println(newMap)
Note: the above code works when oriwords is small, but it fails when oriwords is large, apparently because all the computation happens on the Spark driver node. When I run it, I get the following out-of-memory exception:
WARN HeartbeatReceiver:66 - Removing executor driver with no recent heartbeats: 159099 ms exceeds timeout 120000 ms
Exception in thread "dispatcher-event-loop-1"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "dispatcher-event-loop-1"
Exception in thread "refresh progress" java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
How can I force the calculation to happen on the Spark cluster, and save the generated HashMap on the cluster itself, instead of it being computed and stored entirely on the Spark driver node?
scala apache-spark
asked Nov 19 '18 at 12:35 by user3243499 · edited Nov 19 '18 at 22:41 by halfer
2 Answers
Things need to be in an RDD, Dataset, DataFrame, et al. for Spark to distribute your computations. Basically everything happens on the driver, except for the work done inside higher-order functions such as map and foreach on one of those structures.

answered Nov 19 '18 at 12:38 by Dominic Egger
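For illustration, here is a minimal sketch of that idea applied to the question's code, re-expressing the fold as flatMap plus reduceByKey so the work runs on the executors. The app name and the HDFS output path are assumptions, not part of the original answer:

import org.apache.spark.sql.SparkSession

object DistributedWordMap {
  def genList(xx: String): Seq[String] = Seq("one", "two", "three", "four")

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("distributed-word-map").getOrCreate()
    val sc = spark.sparkContext

    val oriwords = Set("hello", "how", "are", "you")
    val xv = 2

    // Expand every word on the executors, then keep an entry with the
    // highest count per key -- the same effect as the driver-side foldLeft.
    val newMap = sc.parallelize(oriwords.toSeq)
      .flatMap(word => genList(word).map(ps => (ps, (word, xv))))
      .reduceByKey((a, b) => if (b._2 > a._2) b else a)

    // Persist the result on the cluster; collect() would pull everything
    // back into driver memory and reproduce the original problem.
    newMap.saveAsTextFile("hdfs:///tmp/newMap")  // hypothetical output path
  }
}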
Where does a Map get stored? On the Spark driver or across the Spark cluster? Since it is neither an RDD, a Dataset, nor a DataFrame.
– user3243499 Nov 19 '18 at 12:40
Just on the driver. Distribution only happens for the types listed above, not for the Scala standard-library Map.
– Dominic Egger Nov 19 '18 at 12:42
So you are saying that we cannot have a distributed HashMap on Spark written in Scala?
– user3243499 Nov 19 '18 at 12:44
Well, there are technologies that give you key-value storage with fast random-access lookup (e.g. Redis or HBase), and you can certainly work with key-value RDDs. What do you want to do? Which property of Map do you need?
– Dominic Egger Nov 19 '18 at 12:49
I am building this HashMap for spell-check lookup. I expect it to be very big, ~12*10^9 words.
– user3243499 Nov 19 '18 at 12:51
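As an aside to the thread above: a key-value RDD does support point lookups via lookup(), but each call launches a Spark job, so it is no substitute for a fast in-memory HashMap or a store like Redis or HBase. A minimal sketch, assuming an existing SparkContext sc:

// A pair RDD of (misspelling -> suggestion); lookup() runs a job per call.
val kv = sc.parallelize(Seq(("helo", "hello"), ("wrld", "world")))
val suggestions = kv.lookup("helo")  // returns Seq("hello")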
Spark uses the DataFrame and RDD abstractions to represent data; it doesn't use Scala Maps. So you need to wrap your data in an RDD or DataFrame (the preferred option). Depending on the type of data you have, there are different methods to load it.

answered Nov 19 '18 at 12:39 by Blokje5
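A minimal sketch of both loading routes, assuming a SparkSession named spark; the input path is hypothetical:

import spark.implicits._

// Read one word per line into a Dataset[String] (path is hypothetical).
val words = spark.read.textFile("hdfs:///data/dictionary.txt")

// Or wrap an in-memory collection as a DataFrame of (word, count) rows.
val pairs = Seq(("one", 2), ("two", 2)).toDF("word", "count")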