Re: SparkDriver memory calculation mismatch

2016-11-12 Thread Elkhan Dadashov
In my particular case (to make Spark launching asynchronous), I launch a Hadoop job that consists of only one Spark job, which is launched via SparkLauncher#startApplication(). My App --- launches Map task() --> into cluster Map Tas…
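
A minimal sketch of that launch pattern (the jar path, main class, and listener body are illustrative assumptions, not taken from the original mail):

    import org.apache.spark.launcher.SparkAppHandle;
    import org.apache.spark.launcher.SparkLauncher;

    public class AsyncLaunch {
      public static void main(String[] args) throws Exception {
        // startApplication() returns immediately; the handle reports state changes.
        SparkAppHandle handle = new SparkLauncher()
            .setAppResource("/path/to/app.jar")      // illustrative path
            .setMainClass("com.example.MySparkJob")  // illustrative class
            .setMaster("yarn")
            .setConf(SparkLauncher.DRIVER_MEMORY, "1g")
            .startApplication(new SparkAppHandle.Listener() {
              @Override public void stateChanged(SparkAppHandle h) {
                System.out.println("Spark app state: " + h.getState());
              }
              @Override public void infoChanged(SparkAppHandle h) { }
            });
        // The enclosing map task is free to do other work while the Spark
        // application runs; handle.getState() tracks its progress.
      }
    }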

Re: SparkDriver memory calculation mismatch

2016-11-12 Thread Sean Owen
Indeed, you get the default values if you don't specify concrete values otherwise. Yes, you should consult the docs for the version you're using. Note that there are different configs for the new 'unified' memory manager introduced in 1.6, so some older resources may be correctly explaining the older 'legacy' memory manager…
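
For orientation, a sketch of the setting names on each side (the values shown are the documented defaults for recent versions; 1.6 used 0.75 for spark.memory.fraction, 2.x uses 0.6):

    import org.apache.spark.SparkConf;

    public class MemoryConfSketch {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            // Unified memory manager (1.6+):
            .set("spark.memory.fraction", "0.6")          // share of (heap - 300MB) for execution + storage
            .set("spark.memory.storageFraction", "0.5");  // part of that pool protected from eviction
        // The legacy manager (pre-1.6, or spark.memory.useLegacyMode=true)
        // used spark.storage.memoryFraction and spark.shuffle.memoryFraction instead.
        System.out.println(conf.toDebugString());
      }
    }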

Re: SparkDriver memory calculation mismatch

2016-11-12 Thread Elkhan Dadashov
@Sean Owen, thanks for your reply. I put the wrong link to the blog post; here is the correct link, which describes Spark memory settings on YARN. I guess they have misused the terms Spark driver/B…

Re: SparkDriver memory calculation mismatch

2016-11-12 Thread Sean Owen
If you're pointing at the 336MB, then it's not really related to any of the items you cite here. This is the memory managed internally by MemoryStore. The blog post refers to the legacy memory manager. You can see a bit of how it works in the code, but this is the sum of the on-heap and off-heap memory…
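
As a rough sketch of where a figure like 336MB can come from under the unified manager (the constants mirror the Spark source; the 860MB heap reading is purely an illustrative assumption):

    public class UnifiedPoolEstimate {
      public static void main(String[] args) {
        long reserved = 300L * 1024 * 1024;                   // RESERVED_SYSTEM_MEMORY_BYTES in the Spark source
        long systemMemory = Runtime.getRuntime().maxMemory(); // what the JVM actually grants, somewhat below -Xmx
        double fraction = 0.6;                                // spark.memory.fraction (2.x default)
        long pool = (long) ((systemMemory - reserved) * fraction);
        System.out.printf("unified memory pool ~= %d MB%n", pool / (1024 * 1024));
        // Illustration: if the JVM reports ~860 MB usable heap for -Xmx1g,
        // then (860 - 300) * 0.6 ~= 336 MB.
      }
    }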

SparkDriver memory calculation mismatch

2016-11-11 Thread Elkhan Dadashov
Hi, the Spark website indicates the default Spark properties. I did not override any properties in the spark-defaults.conf file, but when I launch Spark in yarn-client mode:

    spark.driver.memory   1g
    spark.yarn.am.memory  512m
    spark.yarn.am.m…
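
One way to check which values actually took effect is to dump the driver's SparkConf from inside the application (a sketch; master and deploy mode are assumed to be supplied by spark-submit, and in yarn-client mode this runs in the launching JVM):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class ShowEffectiveConf {
      public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(new SparkConf().setAppName("conf-check"));
        // Prints every explicitly-set property; anything absent falls back
        // to the documented default (e.g. spark.driver.memory -> 1g).
        System.out.println(sc.getConf().toDebugString());
        sc.stop();
      }
    }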