Hi all,
I saw Spark 1.6 has a new off-heap setting: spark.memory.offHeap.size.
The doc says we need to shrink the on-heap size accordingly. But on YARN the
on-heap size and the YARN container limit are set together via
spark.executor.memory (JVM memory opts are not allowed, according to the doc),
so how can we set the executor memory correctly in this case?
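For reference, this is the kind of spark-defaults.conf sketch I have in mind;
the values are purely illustrative and assume a container budget of roughly 4g:

    spark.executor.memory          3g
    spark.memory.offHeap.enabled   true
    spark.memory.offHeap.size      1g

What is unclear to me is whether the off-heap size also has to fit under the
YARN container limit, hence the question.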
Hi,
Where did the OOM happen, in the driver or an executor?
Sometimes the SparkSQL driver OOMs on tables with a large number of
partitions. If so, you might want to increase spark.driver.memory in
spark-defaults.conf.
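A minimal sketch of what I mean, in conf/spark-defaults.conf (the 4g value is
only illustrative; size it to your workload):

    spark.driver.memory   4g

Note that in client mode this cannot be set from application code via
SparkConf, since the driver JVM has already started; use the config file or
pass --driver-memory to spark-submit instead.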
Shawn
On Jul 7, 2015, at 3:58 PM, shsh...@tsmc.com wrote:
Dear all,
We've tried to
Hi guys,
I was trying to deploy the SparkSQL Thrift server on Hadoop 2.5.2 with
Kerberos and Hive 0.13. It seems I hit the problem below when I tried to
start the Thrift server:
java.lang.NoSuchFieldError: SASL_PROPS
at