What was your reasoning for using 24G on the worker?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/shuffle-memory-requirements-tp4048p15375.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Turns out that my ulimit settings were too low. I bumped them up and the job
successfully completes. Here's what I have now:
$ ulimit -u   # max user processes
81920
$ ulimit -n   # open files
81920
I was thrown off by the OutOfMemoryError into thinking it was Spark running
out of memory in
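For anyone hitting the same wall, here is a minimal shell sketch for checking the limits a Spark worker will inherit and raising them; the 81920 figure is just the value that worked in this thread, not a general recommendation, and the "spark" user name is an assumption:

```shell
# Limits are per-process and inherited: a Spark worker forked from this
# shell gets whatever the shell's soft limits are at launch time.
ulimit -u    # soft limit on max user processes (nproc)
ulimit -n    # soft limit on open files (nofile)
ulimit -Hn   # hard limit on open files -- the ceiling for the soft limit

# Raise the soft open-file limit for the current session, e.g.:
#   ulimit -n 81920
# This fails if 81920 exceeds the hard limit. To raise limits
# persistently, add entries for the worker's user (assumed here to be
# "spark") to /etc/security/limits.conf:
#   spark  soft  nofile  81920
#   spark  hard  nofile  81920
#   spark  soft  nproc   81920
#   spark  hard  nproc   81920
```

After editing limits.conf, log in again (or restart the worker daemon) so the new limits take effect; running `ulimit -n` from the worker's environment confirms they were picked up.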
A typo - I meant section 2.1.2.5, "ulimit and nproc", of
https://hbase.apache.org/book.html
Ameet
On Fri, Apr 11, 2014 at 10:32 AM, Ameet Kini ameetk...@gmail.com wrote:
> Turns out that my ulimit settings were too low. I bumped them up and the job
> successfully completes. Here's what I have now: