Re: [Spark Core][Advanced]: Wrong memory allocation on standalone mode cluster

2021-04-18 Thread Sean Owen
Are you sure about the worker memory configuration? What are you setting --memory to, and what does the worker UI think its memory allocation is? On Sun, Apr 18, 2021 at 4:08 AM Mohamadreza Rostami <mohamadrezarosta...@gmail.com> wrote: > I see a bug in executor memory allocation in the s

[Spark Core][Advanced]: Wrong memory allocation on standalone mode cluster

2021-04-18 Thread Mohamadreza Rostami
I see a bug in executor memory allocation in the standalone cluster, but I can't find which part of the Spark code causes this problem. That's why I decided to raise this issue here. Assume you have 3 workers, each with 10 CPU cores and 10 GB of memory. Assume also you have 2 Spark jobs that run
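
A minimal sketch of how each of two concurrent jobs might be capped on such a cluster (the master URL and values are illustrative, not taken from the thread):

    import org.apache.spark.{SparkConf, SparkContext}

    // Illustrative settings for one of two concurrent jobs on a standalone cluster
    // of 3 workers, each with 10 cores and 10 GB. The master URL and all values are
    // assumptions for the example only.
    val conf = new SparkConf()
      .setAppName("job-a")
      .setMaster("spark://master-host:7077")
      .set("spark.executor.cores", "5")   // cores per executor
      .set("spark.executor.memory", "5g") // heap requested per executor from each worker
      .set("spark.cores.max", "15")       // cap total cores so the second job can also run
    val sc = new SparkContext(conf)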

Re: Memory allocation

2020-04-17 Thread Muhib Khan
spark.executor.memory and spark.driver.memory specify the size of the JVM heap for the executor and the driver, respectively. You can understand a bit more about memory usage from here. On Fri, Apr 17, 2020 at 4:07 PM
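
For instance, a short sketch of setting those two properties (values are illustrative); note that in client mode spark.driver.memory only takes effect if supplied before the driver JVM starts, e.g. via spark-submit or spark-defaults.conf:

    import org.apache.spark.{SparkConf, SparkContext}

    // spark.executor.memory and spark.driver.memory size the JVM heap (-Xmx) of the
    // executor and driver processes. Values here are examples only.
    val conf = new SparkConf()
      .setAppName("heap-sizing-example")
      .set("spark.executor.memory", "4g") // heap of each executor JVM
      .set("spark.driver.memory", "2g")   // heap of the driver JVM; effective only if set
                                          // before the driver JVM launches
    val sc = new SparkContext(conf)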

Memory allocation

2020-04-17 Thread Pat Ferrel
I have used Spark for several years and realize from recent chatter on this list that I don’t really understand how it uses memory. Specifically: are spark.executor.memory and spark.driver.memory taken from the JVM heap? When does Spark take memory from the JVM heap, and when does it take it from off the JVM heap?
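
As a point of reference for the question, a sketch of the related settings (assuming Spark 1.6+ unified memory management; values are illustrative): spark.executor.memory sizes the on-heap region, while off-heap use is a separate, opt-in budget:

    import org.apache.spark.SparkConf

    // On-heap vs. off-heap knobs, with example values. spark.executor.memory is the JVM
    // heap; spark.memory.offHeap.* controls the optional off-heap allocation, which is
    // disabled by default and sized outside the heap.
    val conf = new SparkConf()
      .set("spark.executor.memory", "8g")          // on-heap (JVM -Xmx) per executor
      .set("spark.memory.offHeap.enabled", "true") // opt in to off-heap execution memory
      .set("spark.memory.offHeap.size", "2g")      // off-heap budget, outside the JVM heap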

--driver-memory allocation question

2018-04-20 Thread klrmowse
newb question... say memory per node is 16GB for 6 nodes (for a total of 96GB for the cluster). Is 16GB the max amount of memory that can be allocated to the driver (since it is, after all, 16GB per node)? Thanks

Spark Memory Allocation Exception

2016-09-09 Thread Sunil Tripathy
org.apache.hadoop.hive.ql.metadata.HiveException: parquet.hadoop.MemoryManager$1: New Memory allocation 1047552 bytes is smaller than the minimum allocation size of 1048576 bytes. The input data size is around 350 GB and we have around 145 nodes with 384 GB on each node. Any pointers to resolve the issue will be appreciated. Thanks
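
A commonly suggested mitigation (an assumption here, not confirmed in this thread) is to lower the Parquet row-group size so that the memory manager's per-writer allotment stays above the 1,048,576-byte minimum, or to reduce how many Parquet files each task has open at once:

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("parquet-write-example"))

    // Hypothetical mitigation: shrink parquet.block.size (row-group size) from its
    // 128 MB default so that many concurrently open writers still get at least the
    // minimum allocation. The value below is illustrative.
    sc.hadoopConfiguration.setInt("parquet.block.size", 32 * 1024 * 1024)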

Re: Memory allocation for Broadcast values

2015-12-25 Thread Chris Fregly
Note that starting with Spark 1.6, memory can be dynamically allocated by the Spark execution engine based on workload heuristics. You can still set a low watermark for spark.storage.memoryFraction (RDD cache), but the rest can be dynamic. Here are some relevant slides from a recent
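
In the unified memory manager this refers to, the "low watermark" role is played by spark.memory.storageFraction (the legacy spark.storage.memoryFraction only applies when spark.memory.useLegacyMode is enabled); a sketch with illustrative values:

    import org.apache.spark.SparkConf

    // Spark 1.6+ unified memory: spark.memory.fraction is the share of heap used for
    // execution plus storage, and spark.memory.storageFraction is the portion of that
    // region protected from eviction (the "low watermark"). Values are examples.
    val conf = new SparkConf()
      .set("spark.memory.fraction", "0.75")       // 1.6-era default; later releases use 0.6
      .set("spark.memory.storageFraction", "0.5") // minimum share reserved for cached data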

Re: Memory allocation for Broadcast values

2015-12-22 Thread Akhil Das
If you are creating a huge map on the driver, then spark.driver.memory should be set to a higher value to hold your map. Since you are going to broadcast this map, your Spark executors must have enough memory to hold this map as well, which can be set using spark.executor.memory, and
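
A minimal sketch of the pattern being described (the map contents and memory values are illustrative):

    import org.apache.spark.{SparkConf, SparkContext}

    // The driver heap must hold the map while it is built and serialized; each executor
    // heap must hold one deserialized copy of the broadcast. Values are examples only.
    val conf = new SparkConf()
      .setAppName("broadcast-map-example")
      .set("spark.driver.memory", "8g")   // effective only if set before the driver JVM starts
      .set("spark.executor.memory", "8g")
    val sc = new SparkContext(conf)

    // Build the map on the driver, then ship one copy per executor via broadcast.
    val bigMap: Map[String, Int] = (1 to 1000000).map(i => s"key-$i" -> i).toMap
    val broadcastMap = sc.broadcast(bigMap)

    val counts = sc.parallelize(Seq("key-1", "key-42", "missing"))
      .map(k => broadcastMap.value.getOrElse(k, 0))
      .collect()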

Memory allocation for Broadcast values

2015-12-20 Thread Pat Ferrel
I have a large Map that is assembled in the driver and broadcast to each node. My question is how best to allocate memory for this. The Driver has to have enough memory for the Maps, but only one copy is serialized to each node. What type of memory should I size to match the Maps? Is the

Re: Memory allocation error with Spark 1.5

2015-08-06 Thread Alexis Seigneurin
Works like a charm. Thanks Reynold for the quick and efficient response! Alexis 2015-08-05 19:19 GMT+02:00 Reynold Xin r...@databricks.com: In Spark 1.5, we have a new way to manage memory (part of Project Tungsten). The default unit of memory allocation is 64MB, which is way too high when

Memory allocation error with Spark 1.5

2015-08-05 Thread Alexis Seigneurin
Hi, I'm receiving a memory allocation error with a recent build of Spark 1.5: java.io.IOException: Unable to acquire 67108864 bytes of memory at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.acquireNewPageIfNecessary(UnsafeExternalSorter.java:348

Re: Memory allocation error with Spark 1.5

2015-08-05 Thread Reynold Xin
In Spark 1.5, we have a new way to manage memory (part of Project Tungsten). The default unit of memory allocation is 64MB, which is way too high when you have 1G of memory allocated in total and have more than 4 threads. We will reduce the default page size before releasing 1.5. For now, you
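
The interim workaround the (truncated) reply goes on to describe was lowering the allocation page size; a sketch of that, where the property name and value are stated as assumptions about the 1.5-era configuration rather than quoted from the message:

    import org.apache.spark.SparkConf

    // Assumed workaround: lower the per-allocation page size so small heaps running
    // several task threads can still acquire pages. Name and value are assumptions
    // for illustration only.
    val conf = new SparkConf()
      .set("spark.buffer.pageSize", "2m")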

Re: How does one decide no of executors/cores/memory allocation?

2015-06-17 Thread nsalian
http://apache-spark-user-list.1001560.n3.nabble.com/How-does-one-decide-no-of-executors-cores-memory-allocation-tp23326p23369.html

Re: How does one decide no of executors/cores/memory allocation?

2015-06-16 Thread shreesh
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/How-does-one-decide-no-of-executors-cores-memory-allocation-tp23326p23339.html

RE: How does one decide no of executors/cores/memory allocation?

2015-06-16 Thread Evo Eftimov
Subject: Re: How does one decide no of executors/cores/memory allocation? I realize that there are a lot of ways to configure my application in Spark. The part that is not clear is how I decide, say, for example, into how many partitions I should divide my data, or how much RAM I should have, or how
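
A rough sketch of the usual rule of thumb for the partition-count part of that question (roughly 2-3 tasks per available core; the core count and input path are illustrative):

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("partitioning-example"))

    // Common heuristic: aim for about 2-3 partitions per core in the cluster so tasks
    // stay small and stragglers are amortized. Core count and path are examples only.
    val totalCores = 24
    val data = sc.textFile("hdfs:///example/input") // hypothetical input path
      .repartition(totalCores * 3)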

Re: How does one decide no of executors/cores/memory allocation?

2015-06-16 Thread Himanshu Mehra
and the number of workers to start by initializing SPARK_EXECUTOR_INSTANCES in the $SPARK_HOME/conf/spark-env.sh file. Thanks, Himanshu -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/How-does-one-decide-no-of-executors-cores-memory-allocation-tp23326p23330
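
For reference, the property form of that environment variable (assuming YARN, where spark.executor.instances corresponds to SPARK_EXECUTOR_INSTANCES; the value is an example):

    import org.apache.spark.SparkConf

    // Property equivalent of setting SPARK_EXECUTOR_INSTANCES in spark-env.sh on YARN.
    // On a standalone cluster, the related spark-env.sh setting is SPARK_WORKER_INSTANCES
    // (workers per machine). The value is illustrative.
    val conf = new SparkConf()
      .set("spark.executor.instances", "6")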

How does one decide no of executors/cores/memory allocation?

2015-06-15 Thread shreesh
http://apache-spark-user-list.1001560.n3.nabble.com/How-does-one-decide-no-of-executors-cores-memory-allocation-tp23326.html

Re: How does one decide no of executors/cores/memory allocation?

2015-06-15 Thread gaurav sharma
http://apache-spark-user-list.1001560.n3.nabble.com/How-does-one-decide-no-of-executors-cores-memory-allocation-tp23326.html

SparkContext.getCallSite is in the top of profiler by memory allocation

2015-04-30 Thread Igor Petrov
don't need this information and we want to improve performance by disabling CallSite. Thank you -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/SparkContext-getCallSite-is-in-the-top-of-profiler-by-memory-allocation-tp22716.html

Re: Spark on YARN driver memory allocation bug?

2014-10-17 Thread Boduo Li
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-on-YARN-driver-memory-allocation-bug-tp15961p16721.html

Re: Spark on YARN driver memory allocation bug?

2014-10-09 Thread Greg Hill
memory' (errno=12) # # There is insufficient memory for the Java Runtime Environment to continue. # Native memory allocation (malloc) failed to allocate 4342677504 bytes for committing reserved memory. # An error report file with more information is saved as: # /tmp/jvm-3525/hs_error.log From

Re: Spark on YARN driver memory allocation bug?

2014-10-09 Thread Sandy Ryza
/spark-yarn/lib/spark-examples*.jar 1000 OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x0006fd28, 4342677504, 0) failed; error='Cannot allocate memory' (errno=12) # # There is insufficient memory for the Java Runtime Environment to continue. # Native memory allocation (malloc

Spark on YARN driver memory allocation bug?

2014-10-08 Thread Greg Hill
So, I think this is a bug, but I wanted to get some feedback before I reported it as such. On Spark on YARN 1.1.0, if you specify a --driver-memory value higher than the memory available on the client machine, Spark errors out due to failing to allocate enough memory. This happens

Re: Spark on YARN driver memory allocation bug?

2014-10-08 Thread Andrew Or
Hi Greg, It does seem like a bug. What is the particular exception message that you see? Andrew 2014-10-08 12:12 GMT-07:00 Greg Hill greg.h...@rackspace.com: So, I think this is a bug, but I wanted to get some feedback before I reported it as such. On Spark on YARN, 1.1.0, if you specify