Each task will be processing roughly 10+TB / 2000 = 5G of data. reduceByKey will
build a hash table of the unique lines from this 5G of data and keep it in memory,
which may exceed 16G.
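To make that concrete, here is a minimal sketch (not the poster's code; the input and
output paths and the partition count are assumptions) showing how passing a larger
partition count to reduceByKey shrinks the amount of data each task has to hold in its
in-memory hash table:

// Minimal sketch; paths and partition count are placeholders.
import org.apache.spark.{SparkConf, SparkContext}

object ReduceByKeyPartitions {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("reduceByKey-partitions"))

    val lines = sc.textFile("hdfs:///data/input/*")   // placeholder path

    // With 2000 partitions each task sees roughly 10 TB / 2000 = 5G;
    // asking for 20000 partitions cuts that to ~0.5G per task.
    val counts = lines
      .map(line => (line, 1L))
      .reduceByKey(_ + _, 20000)

    counts.saveAsTextFile("hdfs:///data/output")      // placeholder path
    sc.stop()
  }
}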
Hi,
I'm running Spark on YARN, carrying out a simple reduceByKey followed by another
reduceByKey after some transformations. After completing the first stage my
ApplicationMaster runs out of memory.
I have 20G assigned to the master, 145 executors (12G each + 4G overhead),
around 90k input files, and 10+TB of input in total.
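For reference, a minimal sketch of what a job with this shape might look like (the
input path, key extraction, and intermediate transformation are assumptions, not the
original code):

// Two reduceByKey stages with transformations in between; all names are placeholders.
import org.apache.spark.{SparkConf, SparkContext}

object TwoStageReduce {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("two-stage-reduceByKey"))

    // ~90k input files, 10+TB in total (placeholder path).
    val records = sc.textFile("hdfs:///data/input/*")

    // First reduceByKey: aggregate per raw key.
    val firstPass = records
      .map(line => (line.split('\t')(0), 1L))
      .reduceByKey(_ + _)

    // Some transformations, then a second reduceByKey on a derived key.
    val secondPass = firstPass
      .map { case (key, count) => (key.take(4), count) }
      .reduceByKey(_ + _)

    secondPass.saveAsTextFile("hdfs:///data/output")  // placeholder path
    sc.stop()
  }
}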
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/AppMaster-OOME-on-YARN-tp12612p12627.html
Sent from the Apache Spark User List mailing list