Re: OutOfMemory error with Spark ML 1.5 logreg example

2015-09-09 Thread Tóth Zoltán
Thanks Zoltan.

So far I've put together a full repro which works both in Docker and on a bigger
real-world cluster. Also, the whole thing only happens in `cluster` mode.
I filed a ticket for it.
Any thoughts?

https://issues.apache.org/jira/browse/SPARK-10487


On Mon, Sep 7, 2015 at 7:59 PM, Zsolt Tóth  wrote:

> Hi,
>
> I ran your example on Spark-1.4.1 and 1.5.0-rc3. It succeeds on 1.4.1 but
> throws the  OOM on 1.5.0.  Do any of you know which PR introduced this
> issue?
>
> Zsolt
>
>
> 2015-09-07 16:33 GMT+02:00 Zoltán Zvara :
>
>> Hey, I'd try to debug, profile ResolvedDataSource. As far as I know, your
>> write will be performed by the JVM.
>>
>> On Mon, Sep 7, 2015 at 4:11 PM Tóth Zoltán  wrote:
>>
>>> Unfortunately I'm getting the same error:
>>> The other interesting things are that:
>>>  - the Parquet files actually got written to HDFS (also with
>>> .write.parquet())
>>>  - the application gets stuck in the RUNNING state for good even after
>>> the error is thrown
>>>
>>> 15/09/07 10:01:10 INFO spark.ContextCleaner: Cleaned accumulator 19
>>> 15/09/07 10:01:10 INFO spark.ContextCleaner: Cleaned accumulator 5
>>> 15/09/07 10:01:12 INFO spark.ContextCleaner: Cleaned accumulator 20
>>> Exception in thread "Thread-7"
>>> Exception: java.lang.OutOfMemoryError thrown from the 
>>> UncaughtExceptionHandler in thread "Thread-7"
>>> Exception in thread "org.apache.hadoop.hdfs.PeerCache@4070d501"
>>> Exception: java.lang.OutOfMemoryError thrown from the 
>>> UncaughtExceptionHandler in thread 
>>> "org.apache.hadoop.hdfs.PeerCache@4070d501"
>>> Exception in thread "LeaseRenewer:r...@docker.rapidminer.com:8020"
>>> Exception: java.lang.OutOfMemoryError thrown from the 
>>> UncaughtExceptionHandler in thread 
>>> "LeaseRenewer:r...@docker.rapidminer.com:8020"
>>> Exception in thread "Reporter"
>>> Exception: java.lang.OutOfMemoryError thrown from the 
>>> UncaughtExceptionHandler in thread "Reporter"
>>> Exception in thread "qtp2134582502-46"
>>> Exception: java.lang.OutOfMemoryError thrown from the 
>>> UncaughtExceptionHandler in thread "qtp2134582502-46"
>>>
>>>
>>>
>>>
>>> On Mon, Sep 7, 2015 at 3:48 PM, boci  wrote:
>>>
 Hi,

 Can you try using the save method instead of write?

 ex: out_df.save("path","parquet")

 b0c1


 --
 Skype: boci13, Hangout: boci.b...@gmail.com

 On Mon, Sep 7, 2015 at 3:35 PM, Zoltán Tóth 
 wrote:

> Aaand, the error! :)
>
> Exception in thread "org.apache.hadoop.hdfs.PeerCache@4e000abf"
> Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread 
> "org.apache.hadoop.hdfs.PeerCache@4e000abf"
> Exception in thread "Thread-7"
> Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "Thread-7"
> Exception in thread "LeaseRenewer:r...@docker.rapidminer.com:8020"
> Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread 
> "LeaseRenewer:r...@docker.rapidminer.com:8020"
> Exception in thread "Reporter"
> Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "Reporter"
> Exception in thread "qtp2115718813-47"
> Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "qtp2115718813-47"
>
> Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "sparkDriver-scheduler-1"
>
> Log Type: stdout
>
> Log Upload Time: Mon Sep 07 09:03:01 -0400 2015
>
> Log Length: 986
>
> Traceback (most recent call last):
>   File "spark-ml.py", line 33, in 
> out_df.write.parquet("/tmp/logparquet")
>   File 
> "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/pyspark.zip/pyspark/sql/readwriter.py",
>  line 422, in parquet
>   File 
> "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/py4j-0.8.2.1-src.zip/py4j/java_gateway.py",
>  line 538, in __call__
>   File 
> "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/pyspark.zip/pyspark/sql/utils.py",
>  line 36, in deco
>   File 
> "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/py4j-0.8.2.1-src.zip/py4j/protocol.py",
>  line 300, in get_return_value
> py4j.protocol.Py4JJavaError
>
>
>
> On Mon, Sep 7, 2015 at 3:27 PM, Zoltán Tóth 
> wrote:
>
>> Hi,
>>
>> When I execute the Spark ML Logistic Regre

Re: OutOfMemory error with Spark ML 1.5 logreg example

2015-09-07 Thread Zsolt Tóth
Hi,

I ran your example on Spark-1.4.1 and 1.5.0-rc3. It succeeds on 1.4.1 but
throws the  OOM on 1.5.0.  Do any of you know which PR introduced this
issue?

Zsolt


2015-09-07 16:33 GMT+02:00 Zoltán Zvara :

> Hey, I'd try to debug, profile ResolvedDataSource. As far as I know, your
> write will be performed by the JVM.
>
> On Mon, Sep 7, 2015 at 4:11 PM Tóth Zoltán  wrote:
>
>> Unfortunately I'm getting the same error:
>> The other interesting things are that:
>>  - the Parquet files actually got written to HDFS (also with
>> .write.parquet())
>>  - the application gets stuck in the RUNNING state for good even after
>> the error is thrown
>>
>> 15/09/07 10:01:10 INFO spark.ContextCleaner: Cleaned accumulator 19
>> 15/09/07 10:01:10 INFO spark.ContextCleaner: Cleaned accumulator 5
>> 15/09/07 10:01:12 INFO spark.ContextCleaner: Cleaned accumulator 20
>> Exception in thread "Thread-7"
>> Exception: java.lang.OutOfMemoryError thrown from the 
>> UncaughtExceptionHandler in thread "Thread-7"
>> Exception in thread "org.apache.hadoop.hdfs.PeerCache@4070d501"
>> Exception: java.lang.OutOfMemoryError thrown from the 
>> UncaughtExceptionHandler in thread 
>> "org.apache.hadoop.hdfs.PeerCache@4070d501"
>> Exception in thread "LeaseRenewer:r...@docker.rapidminer.com:8020"
>> Exception: java.lang.OutOfMemoryError thrown from the 
>> UncaughtExceptionHandler in thread 
>> "LeaseRenewer:r...@docker.rapidminer.com:8020"
>> Exception in thread "Reporter"
>> Exception: java.lang.OutOfMemoryError thrown from the 
>> UncaughtExceptionHandler in thread "Reporter"
>> Exception in thread "qtp2134582502-46"
>> Exception: java.lang.OutOfMemoryError thrown from the 
>> UncaughtExceptionHandler in thread "qtp2134582502-46"
>>
>>
>>
>>
>> On Mon, Sep 7, 2015 at 3:48 PM, boci  wrote:
>>
>>> Hi,
>>>
>>> Can you try using the save method instead of write?
>>>
>>> ex: out_df.save("path","parquet")
>>>
>>> b0c1
>>>
>>>
>>> --
>>> Skype: boci13, Hangout: boci.b...@gmail.com
>>>
>>> On Mon, Sep 7, 2015 at 3:35 PM, Zoltán Tóth 
>>> wrote:
>>>
 Aaand, the error! :)

 Exception in thread "org.apache.hadoop.hdfs.PeerCache@4e000abf"
 Exception: java.lang.OutOfMemoryError thrown from the 
 UncaughtExceptionHandler in thread 
 "org.apache.hadoop.hdfs.PeerCache@4e000abf"
 Exception in thread "Thread-7"
 Exception: java.lang.OutOfMemoryError thrown from the 
 UncaughtExceptionHandler in thread "Thread-7"
 Exception in thread "LeaseRenewer:r...@docker.rapidminer.com:8020"
 Exception: java.lang.OutOfMemoryError thrown from the 
 UncaughtExceptionHandler in thread 
 "LeaseRenewer:r...@docker.rapidminer.com:8020"
 Exception in thread "Reporter"
 Exception: java.lang.OutOfMemoryError thrown from the 
 UncaughtExceptionHandler in thread "Reporter"
 Exception in thread "qtp2115718813-47"
 Exception: java.lang.OutOfMemoryError thrown from the 
 UncaughtExceptionHandler in thread "qtp2115718813-47"

 Exception: java.lang.OutOfMemoryError thrown from the 
 UncaughtExceptionHandler in thread "sparkDriver-scheduler-1"

 Log Type: stdout

 Log Upload Time: Mon Sep 07 09:03:01 -0400 2015

 Log Length: 986

 Traceback (most recent call last):
   File "spark-ml.py", line 33, in 
 out_df.write.parquet("/tmp/logparquet")
   File 
 "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/pyspark.zip/pyspark/sql/readwriter.py",
  line 422, in parquet
   File 
 "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/py4j-0.8.2.1-src.zip/py4j/java_gateway.py",
  line 538, in __call__
   File 
 "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/pyspark.zip/pyspark/sql/utils.py",
  line 36, in deco
   File 
 "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/py4j-0.8.2.1-src.zip/py4j/protocol.py",
  line 300, in get_return_value
 py4j.protocol.Py4JJavaError



 On Mon, Sep 7, 2015 at 3:27 PM, Zoltán Tóth 
 wrote:

> Hi,
>
> When I execute the Spark ML Logistic Regression example in pyspark I
> run into an OutOfMemory exception. I'm wondering if any of you have experienced
> the same or have a hint about how to fix this.
>
> The interesting bit is that I only get the exception when I try to
> write the result DataFrame into a file. If I only "print" any of the
> results, it all works fine.
>
> My Setup:
> Spark 1.5.0-SNAPSHOT built for Hadoop 2.6.0 (I'm workin

Re: OutOfMemory error with Spark ML 1.5 logreg example

2015-09-07 Thread Zoltán Zvara
Hey, I'd try to debug, profile ResolvedDataSource. As far as I know, your
write will be performed by the JVM.
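A rough sketch of that point (illustrative setup and path, not from the original message): in PySpark the writer call is only a thin Py4J wrapper, so the Parquet write is driven from the JVM side and any failure surfaces there, which is why the traceback quoted below ends in py4j with a Py4JJavaError.

from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext

sc = SparkContext(conf=SparkConf().setAppName("write-path-sketch"))
sqlContext = SQLContext(sc)
df = sqlContext.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

# The Python call below only forwards the request through the Py4J gateway;
# the ResolvedDataSource/Parquet work itself happens in the JVM process.
df.write.parquet("/tmp/write_path_sketch")
# Roughly what it delegates to on the JVM side:
# df._jdf.write().parquet("/tmp/write_path_sketch")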

On Mon, Sep 7, 2015 at 4:11 PM Tóth Zoltán  wrote:

> Unfortunately I'm getting the same error:
> The other interesting things are that:
>  - the Parquet files actually got written to HDFS (also with
> .write.parquet())
>  - the application gets stuck in the RUNNING state for good even after the
> error is thrown
>
> 15/09/07 10:01:10 INFO spark.ContextCleaner: Cleaned accumulator 19
> 15/09/07 10:01:10 INFO spark.ContextCleaner: Cleaned accumulator 5
> 15/09/07 10:01:12 INFO spark.ContextCleaner: Cleaned accumulator 20
> Exception in thread "Thread-7"
> Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "Thread-7"
> Exception in thread "org.apache.hadoop.hdfs.PeerCache@4070d501"
> Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "org.apache.hadoop.hdfs.PeerCache@4070d501"
> Exception in thread "LeaseRenewer:r...@docker.rapidminer.com:8020"
> Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread 
> "LeaseRenewer:r...@docker.rapidminer.com:8020"
> Exception in thread "Reporter"
> Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "Reporter"
> Exception in thread "qtp2134582502-46"
> Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "qtp2134582502-46"
>
>
>
>
> On Mon, Sep 7, 2015 at 3:48 PM, boci  wrote:
>
>> Hi,
>>
>> Can you try using the save method instead of write?
>>
>> ex: out_df.save("path","parquet")
>>
>> b0c1
>>
>>
>> --
>> Skype: boci13, Hangout: boci.b...@gmail.com
>>
>> On Mon, Sep 7, 2015 at 3:35 PM, Zoltán Tóth 
>> wrote:
>>
>>> Aaand, the error! :)
>>>
>>> Exception in thread "org.apache.hadoop.hdfs.PeerCache@4e000abf"
>>> Exception: java.lang.OutOfMemoryError thrown from the 
>>> UncaughtExceptionHandler in thread 
>>> "org.apache.hadoop.hdfs.PeerCache@4e000abf"
>>> Exception in thread "Thread-7"
>>> Exception: java.lang.OutOfMemoryError thrown from the 
>>> UncaughtExceptionHandler in thread "Thread-7"
>>> Exception in thread "LeaseRenewer:r...@docker.rapidminer.com:8020"
>>> Exception: java.lang.OutOfMemoryError thrown from the 
>>> UncaughtExceptionHandler in thread 
>>> "LeaseRenewer:r...@docker.rapidminer.com:8020"
>>> Exception in thread "Reporter"
>>> Exception: java.lang.OutOfMemoryError thrown from the 
>>> UncaughtExceptionHandler in thread "Reporter"
>>> Exception in thread "qtp2115718813-47"
>>> Exception: java.lang.OutOfMemoryError thrown from the 
>>> UncaughtExceptionHandler in thread "qtp2115718813-47"
>>>
>>> Exception: java.lang.OutOfMemoryError thrown from the 
>>> UncaughtExceptionHandler in thread "sparkDriver-scheduler-1"
>>>
>>> Log Type: stdout
>>>
>>> Log Upload Time: Mon Sep 07 09:03:01 -0400 2015
>>>
>>> Log Length: 986
>>>
>>> Traceback (most recent call last):
>>>   File "spark-ml.py", line 33, in 
>>> out_df.write.parquet("/tmp/logparquet")
>>>   File 
>>> "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/pyspark.zip/pyspark/sql/readwriter.py",
>>>  line 422, in parquet
>>>   File 
>>> "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/py4j-0.8.2.1-src.zip/py4j/java_gateway.py",
>>>  line 538, in __call__
>>>   File 
>>> "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/pyspark.zip/pyspark/sql/utils.py",
>>>  line 36, in deco
>>>   File 
>>> "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/py4j-0.8.2.1-src.zip/py4j/protocol.py",
>>>  line 300, in get_return_value
>>> py4j.protocol.Py4JJavaError
>>>
>>>
>>>
>>> On Mon, Sep 7, 2015 at 3:27 PM, Zoltán Tóth 
>>> wrote:
>>>
 Hi,

 When I execute the Spark ML Logistic Regression example in pyspark I
 run into an OutOfMemory exception. I'm wondering if any of you have experienced
 the same or have a hint about how to fix this.

 The interesting bit is that I only get the exception when I try to
 write the result DataFrame into a file. If I only "print" any of the
 results, it all works fine.

 My Setup:
 Spark 1.5.0-SNAPSHOT built for Hadoop 2.6.0 (I'm working with the
 latest nightly build)
 Build flags: -Psparkr -Phadoop-2.6 -Phive -Phive-thriftserver -Pyarn
 -DzincPort=3034

 I'm using the default resource setup
 15/09/07 08:49:04 INFO yarn.YarnAllocator: Will request 2 executor
 containers, each with 1 cores and 1408 MB memory including 384 MB overhead

Re: OutOfMemory error with Spark ML 1.5 logreg example

2015-09-07 Thread Tóth Zoltán
Unfortunately I'm getting the same error:
The other interesting things are that:
 - the Parquet files actually got written to HDFS (also with
.write.parquet())
 - the application gets stuck in the RUNNING state for good even after the
error is thrown

15/09/07 10:01:10 INFO spark.ContextCleaner: Cleaned accumulator 19
15/09/07 10:01:10 INFO spark.ContextCleaner: Cleaned accumulator 5
15/09/07 10:01:12 INFO spark.ContextCleaner: Cleaned accumulator 20
Exception in thread "Thread-7"
Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread "Thread-7"
Exception in thread "org.apache.hadoop.hdfs.PeerCache@4070d501"
Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread
"org.apache.hadoop.hdfs.PeerCache@4070d501"
Exception in thread "LeaseRenewer:r...@docker.rapidminer.com:8020"
Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread
"LeaseRenewer:r...@docker.rapidminer.com:8020"
Exception in thread "Reporter"
Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread "Reporter"
Exception in thread "qtp2134582502-46"
Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread "qtp2134582502-46"




On Mon, Sep 7, 2015 at 3:48 PM, boci  wrote:

> Hi,
>
> Can you try using the save method instead of write?
>
> ex: out_df.save("path","parquet")
>
> b0c1
>
>
> --
> Skype: boci13, Hangout: boci.b...@gmail.com
>
> On Mon, Sep 7, 2015 at 3:35 PM, Zoltán Tóth  wrote:
>
>> Aaand, the error! :)
>>
>> Exception in thread "org.apache.hadoop.hdfs.PeerCache@4e000abf"
>> Exception: java.lang.OutOfMemoryError thrown from the 
>> UncaughtExceptionHandler in thread 
>> "org.apache.hadoop.hdfs.PeerCache@4e000abf"
>> Exception in thread "Thread-7"
>> Exception: java.lang.OutOfMemoryError thrown from the 
>> UncaughtExceptionHandler in thread "Thread-7"
>> Exception in thread "LeaseRenewer:r...@docker.rapidminer.com:8020"
>> Exception: java.lang.OutOfMemoryError thrown from the 
>> UncaughtExceptionHandler in thread 
>> "LeaseRenewer:r...@docker.rapidminer.com:8020"
>> Exception in thread "Reporter"
>> Exception: java.lang.OutOfMemoryError thrown from the 
>> UncaughtExceptionHandler in thread "Reporter"
>> Exception in thread "qtp2115718813-47"
>> Exception: java.lang.OutOfMemoryError thrown from the 
>> UncaughtExceptionHandler in thread "qtp2115718813-47"
>>
>> Exception: java.lang.OutOfMemoryError thrown from the 
>> UncaughtExceptionHandler in thread "sparkDriver-scheduler-1"
>>
>> Log Type: stdout
>>
>> Log Upload Time: Mon Sep 07 09:03:01 -0400 2015
>>
>> Log Length: 986
>>
>> Traceback (most recent call last):
>>   File "spark-ml.py", line 33, in 
>> out_df.write.parquet("/tmp/logparquet")
>>   File 
>> "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/pyspark.zip/pyspark/sql/readwriter.py",
>>  line 422, in parquet
>>   File 
>> "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/py4j-0.8.2.1-src.zip/py4j/java_gateway.py",
>>  line 538, in __call__
>>   File 
>> "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/pyspark.zip/pyspark/sql/utils.py",
>>  line 36, in deco
>>   File 
>> "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/py4j-0.8.2.1-src.zip/py4j/protocol.py",
>>  line 300, in get_return_value
>> py4j.protocol.Py4JJavaError
>>
>>
>>
>> On Mon, Sep 7, 2015 at 3:27 PM, Zoltán Tóth 
>> wrote:
>>
>>> Hi,
>>>
>>> When I execute the Spark ML Logistic Regression example in pyspark I run
>>> into an OutOfMemory exception. I'm wondering if any of you have experienced the
>>> same or have a hint about how to fix this.
>>>
>>> The interesting bit is that I only get the exception when I try to write
>>> the result DataFrame into a file. If I only "print" any of the results, it
>>> all works fine.
>>>
>>> My Setup:
>>> Spark 1.5.0-SNAPSHOT built for Hadoop 2.6.0 (I'm working with the latest
>>> nightly build)
>>> Build flags: -Psparkr -Phadoop-2.6 -Phive -Phive-thriftserver -Pyarn
>>> -DzincPort=3034
>>>
>>> I'm using the default resource setup
>>> 15/09/07 08:49:04 INFO yarn.YarnAllocator: Will request 2 executor
>>> containers, each with 1 cores and 1408 MB memory including 384 MB overhead
>>> 15/09/07 08:49:04 INFO yarn.YarnAllocator: Container request (host: Any,
>>> capability: )
>>> 15/09/07 08:49:04 INFO yarn.YarnAllocator: Container request (host: Any,
>>> capability: )
>>>
>>> The script I'm executing:
>>> from pyspark import SparkContext, SparkConf
>>> from pyspark.sql import SQLContext
>>

Re: OutOfMemory error with Spark ML 1.5 logreg example

2015-09-07 Thread boci
Hi,

Can you try using the save method instead of write?

ex: out_df.save("path","parquet")
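For reference, a minimal sketch of the two entry points in the 1.4/1.5 API (data and paths here are illustrative only); both delegate to the same JVM-side writer, so they tend to succeed or fail together:

from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext

sc = SparkContext(conf=SparkConf().setAppName("save-vs-write-sketch"))
sqlContext = SQLContext(sc)
out_df = sqlContext.createDataFrame([(0.0, 1.0), (1.0, 0.0)], ["label", "prediction"])

out_df.write.parquet("/tmp/logparquet_writer")               # DataFrameWriter API
out_df.write.format("parquet").save("/tmp/logparquet_save")  # explicit format + save
# The DataFrame.save(path, source) spelling suggested above is the older,
# deprecated-in-1.4 form of the same call:
# out_df.save("/tmp/logparquet_old", "parquet")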

b0c1

--
Skype: boci13, Hangout: boci.b...@gmail.com

On Mon, Sep 7, 2015 at 3:35 PM, Zoltán Tóth  wrote:

> Aaand, the error! :)
>
> Exception in thread "org.apache.hadoop.hdfs.PeerCache@4e000abf"
> Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "org.apache.hadoop.hdfs.PeerCache@4e000abf"
> Exception in thread "Thread-7"
> Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "Thread-7"
> Exception in thread "LeaseRenewer:r...@docker.rapidminer.com:8020"
> Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread 
> "LeaseRenewer:r...@docker.rapidminer.com:8020"
> Exception in thread "Reporter"
> Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "Reporter"
> Exception in thread "qtp2115718813-47"
> Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "qtp2115718813-47"
>
> Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "sparkDriver-scheduler-1"
>
> Log Type: stdout
>
> Log Upload Time: Mon Sep 07 09:03:01 -0400 2015
>
> Log Length: 986
>
> Traceback (most recent call last):
>   File "spark-ml.py", line 33, in 
> out_df.write.parquet("/tmp/logparquet")
>   File 
> "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/pyspark.zip/pyspark/sql/readwriter.py",
>  line 422, in parquet
>   File 
> "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/py4j-0.8.2.1-src.zip/py4j/java_gateway.py",
>  line 538, in __call__
>   File 
> "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/pyspark.zip/pyspark/sql/utils.py",
>  line 36, in deco
>   File 
> "/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/py4j-0.8.2.1-src.zip/py4j/protocol.py",
>  line 300, in get_return_value
> py4j.protocol.Py4JJavaError
>
>
>
> On Mon, Sep 7, 2015 at 3:27 PM, Zoltán Tóth  wrote:
>
>> Hi,
>>
>> When I execute the Spark ML Logistic Regression example in pyspark I run
>> into an OutOfMemory exception. I'm wondering if any of you have experienced the
>> same or have a hint about how to fix this.
>>
>> The interesting bit is that I only get the exception when I try to write
>> the result DataFrame into a file. If I only "print" any of the results, it
>> all works fine.
>>
>> My Setup:
>> Spark 1.5.0-SNAPSHOT built for Hadoop 2.6.0 (I'm working with the latest
>> nightly build)
>> Build flags: -Psparkr -Phadoop-2.6 -Phive -Phive-thriftserver -Pyarn
>> -DzincPort=3034
>>
>> I'm using the default resource setup
>> 15/09/07 08:49:04 INFO yarn.YarnAllocator: Will request 2 executor
>> containers, each with 1 cores and 1408 MB memory including 384 MB overhead
>> 15/09/07 08:49:04 INFO yarn.YarnAllocator: Container request (host: Any,
>> capability: )
>> 15/09/07 08:49:04 INFO yarn.YarnAllocator: Container request (host: Any,
>> capability: )
>>
>> The script I'm executing:
>> from pyspark import SparkContext, SparkConf
>> from pyspark.sql import SQLContext
>>
>> conf = SparkConf().setAppName("pysparktest")
>> sc = SparkContext(conf=conf)
>> sqlContext = SQLContext(sc)
>>
>> from pyspark.mllib.regression import LabeledPoint
>> from pyspark.mllib.linalg import Vector, Vectors
>>
>> training = sc.parallelize((
>>   LabeledPoint(1.0, Vectors.dense(0.0, 1.1, 0.1)),
>>   LabeledPoint(0.0, Vectors.dense(2.0, 1.0, -1.0)),
>>   LabeledPoint(0.0, Vectors.dense(2.0, 1.3, 1.0)),
>>   LabeledPoint(1.0, Vectors.dense(0.0, 1.2, -0.5))))
>>
>> training_df = training.toDF()
>>
>> from pyspark.ml.classification import LogisticRegression
>>
>> reg = LogisticRegression()
>>
>> reg.setMaxIter(10).setRegParam(0.01)
>> model = reg.fit(training.toDF())
>>
>> test = sc.parallelize((
>>   LabeledPoint(1.0, Vectors.dense(-1.0, 1.5, 1.3)),
>>   LabeledPoint(0.0, Vectors.dense(3.0, 2.0, -0.1)),
>>   LabeledPoint(1.0, Vectors.dense(0.0, 2.2, -1.5))))
>>
>> out_df = model.transform(test.toDF())
>>
>> out_df.write.parquet("/tmp/logparquet")
>>
>> And the command:
>> spark-submit --master yarn --deploy-mode cluster spark-ml.py
>>
>> Thanks,
>> z
>>
>
>


Re: OutOfMemory error with Spark ML 1.5 logreg example

2015-09-07 Thread Zoltán Tóth
Aaand, the error! :)

Exception in thread "org.apache.hadoop.hdfs.PeerCache@4e000abf"
Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread
"org.apache.hadoop.hdfs.PeerCache@4e000abf"
Exception in thread "Thread-7"
Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread "Thread-7"
Exception in thread "LeaseRenewer:r...@docker.rapidminer.com:8020"
Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread
"LeaseRenewer:r...@docker.rapidminer.com:8020"
Exception in thread "Reporter"
Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread "Reporter"
Exception in thread "qtp2115718813-47"
Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread "qtp2115718813-47"

Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread "sparkDriver-scheduler-1"

Log Type: stdout

Log Upload Time: Mon Sep 07 09:03:01 -0400 2015

Log Length: 986

Traceback (most recent call last):
  File "spark-ml.py", line 33, in 
out_df.write.parquet("/tmp/logparquet")
  File 
"/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/pyspark.zip/pyspark/sql/readwriter.py",
line 422, in parquet
  File 
"/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/py4j-0.8.2.1-src.zip/py4j/java_gateway.py",
line 538, in __call__
  File 
"/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/pyspark.zip/pyspark/sql/utils.py",
line 36, in deco
  File 
"/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/root/appcache/application_1441224592929_0022/container_1441224592929_0022_01_01/py4j-0.8.2.1-src.zip/py4j/protocol.py",
line 300, in get_return_value
py4j.protocol.Py4JJavaError



On Mon, Sep 7, 2015 at 3:27 PM, Zoltán Tóth  wrote:

> Hi,
>
> When I execute the Spark ML Logistic Regression example in pyspark I run
> into an OutOfMemory exception. I'm wondering if any of you have experienced the
> same or have a hint about how to fix this.
>
> The interesting bit is that I only get the exception when I try to write
> the result DataFrame into a file. If I only "print" any of the results, it
> all works fine.
>
> My Setup:
> Spark 1.5.0-SNAPSHOT built for Hadoop 2.6.0 (I'm working with the latest
> nightly build)
> Build flags: -Psparkr -Phadoop-2.6 -Phive -Phive-thriftserver -Pyarn
> -DzincPort=3034
>
> I'm using the default resource setup
> 15/09/07 08:49:04 INFO yarn.YarnAllocator: Will request 2 executor
> containers, each with 1 cores and 1408 MB memory including 384 MB overhead
> 15/09/07 08:49:04 INFO yarn.YarnAllocator: Container request (host: Any,
> capability: )
> 15/09/07 08:49:04 INFO yarn.YarnAllocator: Container request (host: Any,
> capability: )
>
> The script I'm executing:
> from pyspark import SparkContext, SparkConf
> from pyspark.sql import SQLContext
>
> conf = SparkConf().setAppName("pysparktest")
> sc = SparkContext(conf=conf)
> sqlContext = SQLContext(sc)
>
> from pyspark.mllib.regression import LabeledPoint
> from pyspark.mllib.linalg import Vector, Vectors
>
> training = sc.parallelize((
>   LabeledPoint(1.0, Vectors.dense(0.0, 1.1, 0.1)),
>   LabeledPoint(0.0, Vectors.dense(2.0, 1.0, -1.0)),
>   LabeledPoint(0.0, Vectors.dense(2.0, 1.3, 1.0)),
>   LabeledPoint(1.0, Vectors.dense(0.0, 1.2, -0.5))))
>
> training_df = training.toDF()
>
> from pyspark.ml.classification import LogisticRegression
>
> reg = LogisticRegression()
>
> reg.setMaxIter(10).setRegParam(0.01)
> model = reg.fit(training.toDF())
>
> test = sc.parallelize((
>   LabeledPoint(1.0, Vectors.dense(-1.0, 1.5, 1.3)),
>   LabeledPoint(0.0, Vectors.dense(3.0, 2.0, -0.1)),
>   LabeledPoint(1.0, Vectors.dense(0.0, 2.2, -1.5))))
>
> out_df = model.transform(test.toDF())
>
> out_df.write.parquet("/tmp/logparquet")
>
> And the command:
> spark-submit --master yarn --deploy-mode cluster spark-ml.py
>
> Thanks,
> z
>


Re: OutOfMemory error in Spark Core

2015-01-15 Thread Akhil Das
Did you try increasing the parallelism?
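For example, parallelism can be raised either globally or per RDD; a minimal sketch with a made-up input path and partition count:

from pyspark import SparkContext, SparkConf

conf = (SparkConf()
        .setAppName("parallelism-sketch")
        .set("spark.default.parallelism", "200"))  # default partition count for shuffles
sc = SparkContext(conf=conf)

rdd = sc.textFile("hdfs:///some/input")  # hypothetical input
rdd = rdd.repartition(200)               # or raise the partition count explicitly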

Thanks
Best Regards

On Fri, Jan 16, 2015 at 10:41 AM, Anand Mohan  wrote:

> We have our Analytics App built on Spark 1.1 Core, Parquet, Avro and Spray.
> We are using Kryo serializer for the Avro objects read from Parquet and we
> are using our custom Kryo registrator (along the lines of  ADAM
> <
> https://github.com/bigdatagenomics/adam/blob/master/adam-core/src/main/scala/org/bdgenomics/adam/serialization/ADAMKryoRegistrator.scala#L51
> >
> , we just added batched writes and flushes to Kryo's Output for each 512 MB
> in the stream, as below
> outstream.array.sliding(512 * 1024 * 1024).foreach(buf => {
>   kryoOut.write(buf)
>   kryoOut.flush()
> })
> )
>
> Our queries are run against a cached RDD (MEMORY_ONLY) that is obtained after
> 1. loading bulk data from Parquet
> 2. union-ing it with incremental data in Avro
> 3. doing timestamp based duplicate removal (including partitioning in
> reduceByKey) and
> 4. joining a couple of MySQL tables using JdbcRdd
>
> Of late, we are seeing major instabilities where the app crashes on a lost
> executor which itself failed due to an OutOfMemory error, as below. This looks
> almost identical to https://issues.apache.org/jira/browse/SPARK-4885 even
> though we are seeing this error in Spark 1.1.
>
> 2015-01-15 20:12:51,653 [handle-message-executor-13] ERROR
> org.apache.spark.executor.ExecutorUncaughtExceptionHandler - Uncaught
> exception in thread Thread[handle-message-executor-13,5,main]
> java.lang.OutOfMemoryError: Requested array size exceeds VM limit
> at java.util.Arrays.copyOf(Arrays.java:2271)
> at
> java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
> at
> java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
> at
> java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
> at
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
> at
> java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
> at com.esotericsoftware.kryo.io.Output.flush(Output.java:155)
> at com.esotericsoftware.kryo.io.Output.require(Output.java:135)
> at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
> at com.esotericsoftware.kryo.io.Output.write(Output.java:183)
> at
>
> com.philips.hc.eici.analytics.streamingservice.AvroSerializer$$anonfun$write$1.apply(AnalyticsKryoRegistrator.scala:31)
> at
>
> com.philips.hc.eici.analytics.streamingservice.AvroSerializer$$anonfun$write$1.apply(AnalyticsKryoRegistrator.scala:30)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at
>
> com.philips.hc.eici.analytics.streamingservice.AvroSerializer.write(AnalyticsKryoRegistrator.scala:30)
> at
>
> com.philips.hc.eici.analytics.streamingservice.AvroSerializer.write(AnalyticsKryoRegistrator.scala:18)
> at com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:501)
> at
>
> com.esotericsoftware.kryo.serializers.FieldSerializer$ObjectField.write(FieldSerializer.java:564)
> at
>
> com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:213)
> at
> com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
> at
>
> org.apache.spark.serializer.KryoSerializationStream.writeObject(KryoSerializer.scala:119)
> at
>
> org.apache.spark.serializer.SerializationStream.writeAll(Serializer.scala:110)
> at
>
> org.apache.spark.storage.BlockManager.dataSerializeStream(BlockManager.scala:1047)
> at
>
> org.apache.spark.storage.BlockManager.dataSerialize(BlockManager.scala:1056)
> at
> org.apache.spark.storage.MemoryStore.getBytes(MemoryStore.scala:154)
> at
> org.apache.spark.storage.BlockManager.doGetLocal(BlockManager.scala:421)
> at
> org.apache.spark.storage.BlockManager.getLocalBytes(BlockManager.scala:387)
> at
>
> org.apache.spark.storage.BlockManagerWorker.getBlock(BlockManagerWorker.scala:100)
> at
>
> org.apache.spark.storage.BlockManagerWorker.processBlockMessage(BlockManagerWorker.scala:79)
> at
>
> org.apache.spark.storage.BlockManagerWorker$$anonfun$2.apply(BlockManagerWorker.scala:48)
> at
>
> org.apache.spark.storage.BlockManagerWorker$$anonfun$2.apply(BlockManagerWorker.scala:48)
> at
>
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>
>
> The driver log is as below
>
> 15/01/15 12:12:53 ERROR scheduler.DAGSchedulerActorSupervisor:
> eventProcesserActor failed; shutting down SparkContext
> java.util.NoSuchElementException: key not found: 2539
> at scala.collection.MapLike$class.default(MapLike.scala:228)
> at scala.collection.AbstractMap.default(Map.scala:58)
> at scala.collection.mutable.HashMap.apply(HashMap.scala:64)
> at
> org.apache.spark.scheduler.D

RE: OutOfMemory Error

2014-08-20 Thread Shao, Saisai
Hi Meethu,

The spark.executor.memory setting is the Java heap size of the forked executor
process, so increasing spark.executor.memory does increase the runtime heap size
of the executor process.

For the details of Spark configurations, you can check: 
http://spark.apache.org/docs/latest/configuration.html
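For example, a minimal sketch with a made-up value (the setting has to be applied before the executors are launched):

from pyspark import SparkContext, SparkConf

conf = (SparkConf()
        .setAppName("executor-memory-sketch")
        .set("spark.executor.memory", "4g"))  # executor JVM heap size (example value)
sc = SparkContext(conf=conf)

# Equivalent on the command line:
#   spark-submit --executor-memory 4g your_job.py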

Thanks
Jerry

From: MEETHU MATHEW [mailto:meethu2...@yahoo.co.in]
Sent: Wednesday, August 20, 2014 4:48 PM
To: Akhil Das; Ghousia
Cc: user@spark.apache.org
Subject: Re: OutOfMemory Error


 Hi ,

How to increase the heap size?

What is the difference between spark executor memory and heap size?

Thanks & Regards,
Meethu M

On Monday, 18 August 2014 12:35 PM, Akhil Das wrote:

I believe spark.shuffle.memoryFraction is the one you are looking for.

spark.shuffle.memoryFraction : Fraction of Java heap to use for aggregation and 
cogroups during shuffles, if spark.shuffle.spill is true. At any given time, 
the collective size of all in-memory maps used for shuffles is bounded by this 
limit, beyond which the contents will begin to spill to disk. If spills are 
often, consider increasing this value at the expense of 
spark.storage.memoryFraction.

You can give it a try.


Thanks
Best Regards

On Mon, Aug 18, 2014 at 12:21 PM, Ghousia wrote:
Thanks for the answer Akhil. We are right now getting rid of this issue by 
increasing the number of partitions. And we are persisting RDDs to DISK_ONLY. 
But the issue is with heavy computations within an RDD. It would be better if 
we had the option of spilling the intermediate transformation results to local 
disk (only in case memory consumption is high). Do we have any such option 
available with Spark? If increasing the partitions is the only way, then 
one might end up with OutOfMemory Errors, when working with certain algorithms 
where intermediate result is huge.

On Mon, Aug 18, 2014 at 12:02 PM, Akhil Das wrote:
Hi Ghousia,

You can try the following:

1. Increase the heap size <https://spark.apache.org/docs/0.9.0/configuration.html>
2. Increase the number of partitions <http://stackoverflow.com/questions/21698443/spark-best-practice-for-retrieving-big-data-from-rdd-to-local-machine>
3. You could try persisting the RDD to use DISK_ONLY <http://spark.apache.org/docs/latest/programming-guide.html#rdd-persistence>


Thanks
Best Regards

On Mon, Aug 18, 2014 at 10:40 AM, Ghousia Taj wrote:
Hi,

I am trying to implement machine learning algorithms on Spark. I am working
on a 3 node cluster, with each node having 5GB of memory. Whenever I am
working with a slightly larger number of records, I end up with an OutOfMemory
Error. The problem is that even if the number of records is only slightly higher, the
intermediate result from a transformation is huge and this results in an
OutOfMemory Error. To overcome this, we are partitioning the data such that
each partition has only a few records.

Is there any better way to fix this issue? Something like spilling the
intermediate data to local disk?

Thanks,
Ghousia.



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/OutOfMemory-Error-tp12275.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org






Re: OutOfMemory Error

2014-08-20 Thread MEETHU MATHEW


 Hi ,

How to increase the heap size?

What is the difference between spark executor memory and heap size?

Thanks & Regards, 
Meethu M


On Monday, 18 August 2014 12:35 PM, Akhil Das  
wrote:
 


I believe spark.shuffle.memoryFraction is the one you are looking for.

spark.shuffle.memoryFraction : Fraction of Java heap to use for aggregation and 
cogroups during shuffles, if spark.shuffle.spill is true. At any given time, 
the collective size of all in-memory maps used for shuffles is bounded by this 
limit, beyond which the contents will begin to spill to disk. If spills are 
often, consider increasing this value at the expense of 
spark.storage.memoryFraction.


You can give it a try.



Thanks
Best Regards


On Mon, Aug 18, 2014 at 12:21 PM, Ghousia  wrote:

Thanks for the answer Akhil. We are right now getting rid of this issue by 
increasing the number of partitions. And we are persisting RDDs to DISK_ONLY. 
But the issue is with heavy computations within an RDD. It would be better if 
we had the option of spilling the intermediate transformation results to local 
disk (only in case memory consumption is high). Do we have any such option 
available with Spark? If increasing the partitions is the only way, then 
one might end up with OutOfMemory Errors, when working with certain algorithms 
where intermediate result is huge.
>
>
>
>
>On Mon, Aug 18, 2014 at 12:02 PM, Akhil Das  wrote:
>
>Hi Ghousia,
>>
>>
>>You can try the following:
>>
>>
>>1. Increase the heap size
>>>2. Increase the number of partitions
>>>3. You could try persisting the RDD to use DISK_ONLY
>>
>>
>>
>>
>>Thanks
>>Best Regards
>>
>>
>>
>>On Mon, Aug 18, 2014 at 10:40 AM, Ghousia Taj  
>>wrote:
>>
>>Hi,
>>>
>>>I am trying to implement machine learning algorithms on Spark. I am working
>>>on a 3 node cluster, with each node having 5GB of memory. Whenever I am
>>>working with a slightly larger number of records, I end up with an OutOfMemory
>>>Error. The problem is that even if the number of records is only slightly higher, the
>>>intermediate result from a transformation is huge and this results in an
>>>OutOfMemory Error. To overcome this, we are partitioning the data such that
>>>each partition has only a few records.
>>>
>>>Is there any better way to fix this issue? Something like spilling the
>>>intermediate data to local disk?
>>>
>>>Thanks,
>>>Ghousia.
>>>
>>>
>>>
>>>--
>>>View this message in context: 
>>>http://apache-spark-user-list.1001560.n3.nabble.com/OutOfMemory-Error-tp12275.html
>>>Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>>
>>>-
>>>To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>>For additional commands, e-mail: user-h...@spark.apache.org
>>>
>>>
>>
>

Re: OutOfMemory Error

2014-08-19 Thread Ghousia
Hi,

Any further info on this??

Do you think it would be useful if we had an in-memory buffer implemented
that stores the content of the new RDD? In case the buffer reaches a
configured threshold, the contents of the buffer are spilled to the local disk.
This saves us from the OutOfMemory Error.

Appreciate any suggestions in this regard.

Many Thanks,
Ghousia.


On Mon, Aug 18, 2014 at 4:05 PM, Ghousia  wrote:

> But this would be applicable only to operations that have a shuffle phase.
>
> This might not be applicable to a simple Map operation where a record is
> mapped to a new huge value, resulting in OutOfMemory Error.
>
>
>
> On Mon, Aug 18, 2014 at 12:34 PM, Akhil Das 
> wrote:
>
>> I believe spark.shuffle.memoryFraction is the one you are looking for.
>>
>> spark.shuffle.memoryFraction : Fraction of Java heap to use for
>> aggregation and cogroups during shuffles, if spark.shuffle.spill is
>> true. At any given time, the collective size of all in-memory maps used for
>> shuffles is bounded by this limit, beyond which the contents will begin to
>> spill to disk. If spills are often, consider increasing this value at the
>> expense of spark.storage.memoryFraction.
>>
>> You can give it a try.
>>
>>
>> Thanks
>> Best Regards
>>
>>
>> On Mon, Aug 18, 2014 at 12:21 PM, Ghousia 
>> wrote:
>>
>>> Thanks for the answer Akhil. We are right now getting rid of this issue
>>> by increasing the number of partitions. And we are persisting RDDs to
>>> DISK_ONLY. But the issue is with heavy computations within an RDD. It would
>>> be better if we had the option of spilling the intermediate transformation
>>> results to local disk (only in case memory consumption is high). Do we
>>> have any such option available with Spark? If increasing the partitions is
>>> the only way, then one might end up with OutOfMemory Errors, when
>>> working with certain algorithms where intermediate result is huge.
>>>
>>>
>>> On Mon, Aug 18, 2014 at 12:02 PM, Akhil Das 
>>> wrote:
>>>
 Hi Ghousia,

 You can try the following:

 1. Increase the heap size
 
 2. Increase the number of partitions
 
 3. You could try persisting the RDD to use DISK_ONLY
 



 Thanks
 Best Regards


 On Mon, Aug 18, 2014 at 10:40 AM, Ghousia Taj >>> > wrote:

> Hi,
>
> I am trying to implement machine learning algorithms on Spark. I am
> working
> on a 3 node cluster, with each node having 5GB of memory. Whenever I am
> working with a slightly larger number of records, I end up with an OutOfMemory
> Error. The problem is that even if the number of records is only slightly higher, the
> intermediate result from a transformation is huge and this results in an
> OutOfMemory Error. To overcome this, we are partitioning the data such
> that
> each partition has only a few records.
>
> Is there any better way to fix this issue? Something like spilling the
> intermediate data to local disk?
>
> Thanks,
> Ghousia.
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/OutOfMemory-Error-tp12275.html
> Sent from the Apache Spark User List mailing list archive at
> Nabble.com.
>
> -
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>

>>>
>>
>


Re: OutOfMemory Error

2014-08-18 Thread Ghousia
But this would be applicable only to operations that have a shuffle phase.

This might not be applicable to a simple Map operation where a record is
mapped to a new huge value, resulting in OutOfMemory Error.



On Mon, Aug 18, 2014 at 12:34 PM, Akhil Das 
wrote:

> I believe spark.shuffle.memoryFraction is the one you are looking for.
>
> spark.shuffle.memoryFraction : Fraction of Java heap to use for
> aggregation and cogroups during shuffles, if spark.shuffle.spill is true.
> At any given time, the collective size of all in-memory maps used for
> shuffles is bounded by this limit, beyond which the contents will begin to
> spill to disk. If spills are often, consider increasing this value at the
> expense of spark.storage.memoryFraction.
>
> You can give it a try.
>
>
> Thanks
> Best Regards
>
>
> On Mon, Aug 18, 2014 at 12:21 PM, Ghousia 
> wrote:
>
>> Thanks for the answer Akhil. We are right now getting rid of this issue
>> by increasing the number of partitions. And we are persisting RDDs to
>> DISK_ONLY. But the issue is with heavy computations within an RDD. It would
>> be better if we had the option of spilling the intermediate transformation
>> results to local disk (only in case memory consumption is high). Do we
>> have any such option available with Spark? If increasing the partitions is
>> the only way, then one might end up with OutOfMemory Errors, when
>> working with certain algorithms where intermediate result is huge.
>>
>>
>> On Mon, Aug 18, 2014 at 12:02 PM, Akhil Das 
>> wrote:
>>
>>> Hi Ghousia,
>>>
>>> You can try the following:
>>>
>>> 1. Increase the heap size
>>> 
>>> 2. Increase the number of partitions
>>> 
>>> 3. You could try persisting the RDD to use DISK_ONLY
>>> 
>>>
>>>
>>>
>>> Thanks
>>> Best Regards
>>>
>>>
>>> On Mon, Aug 18, 2014 at 10:40 AM, Ghousia Taj 
>>> wrote:
>>>
 Hi,

 I am trying to implement machine learning algorithms on Spark. I am
 working
 on a 3 node cluster, with each node having 5GB of memory. Whenever I am
 working with a slightly larger number of records, I end up with an OutOfMemory
 Error. The problem is that even if the number of records is only slightly higher, the
 intermediate result from a transformation is huge and this results in an
 OutOfMemory Error. To overcome this, we are partitioning the data such
 that
 each partition has only a few records.

 Is there any better way to fix this issue? Something like spilling the
 intermediate data to local disk?

 Thanks,
 Ghousia.



 --
 View this message in context:
 http://apache-spark-user-list.1001560.n3.nabble.com/OutOfMemory-Error-tp12275.html
 Sent from the Apache Spark User List mailing list archive at Nabble.com.

 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org


>>>
>>
>


Re: OutOfMemory Error

2014-08-18 Thread Akhil Das
I believe spark.shuffle.memoryFraction is the one you are looking for.

spark.shuffle.memoryFraction : Fraction of Java heap to use for aggregation
and cogroups during shuffles, if spark.shuffle.spill is true. At any given
time, the collective size of all in-memory maps used for shuffles is
bounded by this limit, beyond which the contents will begin to spill to
disk. If spills are often, consider increasing this value at the expense of
spark.storage.memoryFraction.

You can give it a try.
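A minimal sketch of what that looks like (example values only; per the description above, spark.shuffle.spill must be true for the fraction to apply):

from pyspark import SparkContext, SparkConf

conf = (SparkConf()
        .setAppName("shuffle-fraction-sketch")
        .set("spark.shuffle.memoryFraction", "0.4")   # up from the 0.2 default
        .set("spark.storage.memoryFraction", "0.5"))  # down from the 0.6 default
sc = SparkContext(conf=conf)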


Thanks
Best Regards


On Mon, Aug 18, 2014 at 12:21 PM, Ghousia  wrote:

> Thanks for the answer Akhil. We are right now getting rid of this issue by
> increasing the number of partitions. And we are persisting RDDs to
> DISK_ONLY. But the issue is with heavy computations within an RDD. It would
> be better if we had the option of spilling the intermediate transformation
> results to local disk (only in case memory consumption is high). Do we
> have any such option available with Spark? If increasing the partitions is
> the only way, then one might end up with OutOfMemory Errors, when
> working with certain algorithms where intermediate result is huge.
>
>
> On Mon, Aug 18, 2014 at 12:02 PM, Akhil Das 
> wrote:
>
>> Hi Ghousia,
>>
>> You can try the following:
>>
>> 1. Increase the heap size
>> 
>> 2. Increase the number of partitions
>> 
>> 3. You could try persisting the RDD to use DISK_ONLY
>> 
>>
>>
>>
>> Thanks
>> Best Regards
>>
>>
>> On Mon, Aug 18, 2014 at 10:40 AM, Ghousia Taj 
>> wrote:
>>
>>> Hi,
>>>
>>> I am trying to implement machine learning algorithms on Spark. I am
>>> working
>>> on a 3 node cluster, with each node having 5GB of memory. Whenever I am
>>> working with a slightly larger number of records, I end up with an OutOfMemory
>>> Error. The problem is that even if the number of records is only slightly higher, the
>>> intermediate result from a transformation is huge and this results in an
>>> OutOfMemory Error. To overcome this, we are partitioning the data such
>>> that
>>> each partition has only a few records.
>>>
>>> Is there any better way to fix this issue? Something like spilling the
>>> intermediate data to local disk?
>>>
>>> Thanks,
>>> Ghousia.
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://apache-spark-user-list.1001560.n3.nabble.com/OutOfMemory-Error-tp12275.html
>>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>>
>>> -
>>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>> For additional commands, e-mail: user-h...@spark.apache.org
>>>
>>>
>>
>


Re: OutOfMemory Error

2014-08-17 Thread Ghousia
Thanks for the answer Akhil. We are right now getting rid of this issue by
increasing the number of partitions. And we are persisting RDDs to
DISK_ONLY. But the issue is with heavy computations within an RDD. It would
be better if we had the option of spilling the intermediate transformation
results to local disk (only in case memory consumption is high). Do we
have any such option available with Spark? If increasing the partitions is
the only way, then one might end up with OutOfMemory Errors, when
working with certain algorithms where intermediate result is huge.


On Mon, Aug 18, 2014 at 12:02 PM, Akhil Das 
wrote:

> Hi Ghousia,
>
> You can try the following:
>
> 1. Increase the heap size
> 
> 2. Increase the number of partitions
> 
> 3. You could try persisting the RDD to use DISK_ONLY
> 
>
>
>
> Thanks
> Best Regards
>
>
> On Mon, Aug 18, 2014 at 10:40 AM, Ghousia Taj 
> wrote:
>
>> Hi,
>>
>> I am trying to implement machine learning algorithms on Spark. I am
>> working
>> on a 3 node cluster, with each node having 5GB of memory. Whenever I am
>> working with a slightly larger number of records, I end up with an OutOfMemory
>> Error. The problem is that even if the number of records is only slightly higher, the
>> intermediate result from a transformation is huge and this results in an
>> OutOfMemory Error. To overcome this, we are partitioning the data such
>> that
>> each partition has only a few records.
>>
>> Is there any better way to fix this issue? Something like spilling the
>> intermediate data to local disk?
>>
>> Thanks,
>> Ghousia.
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/OutOfMemory-Error-tp12275.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
>>
>>
>


Re: OutOfMemory Error

2014-08-17 Thread Akhil Das
Hi Ghousia,

You can try the following:

1. Increase the heap size

2. Increase the number of partitions

3. You could try persisting the RDD to use DISK_ONLY
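A minimal sketch of points 2 and 3 in PySpark (input path, transformation, and partition count are made up):

from pyspark import SparkContext, SparkConf
from pyspark.storagelevel import StorageLevel

sc = SparkContext(conf=SparkConf().setAppName("disk-only-sketch"))

records = sc.textFile("hdfs:///some/input", minPartitions=500)  # hypothetical input
features = records.map(lambda line: line.split(","))            # placeholder transform

# Keep the (potentially huge) intermediate result off the JVM heap entirely.
features.persist(StorageLevel.DISK_ONLY)
print(features.count())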




Thanks
Best Regards


On Mon, Aug 18, 2014 at 10:40 AM, Ghousia Taj 
wrote:

> Hi,
>
> I am trying to implement machine learning algorithms on Spark. I am working
> on a 3 node cluster, with each node having 5GB of memory. Whenever I am
> working with a slightly larger number of records, I end up with an OutOfMemory
> Error. The problem is that even if the number of records is only slightly higher, the
> intermediate result from a transformation is huge and this results in an
> OutOfMemory Error. To overcome this, we are partitioning the data such that
> each partition has only a few records.
>
> Is there any better way to fix this issue? Something like spilling the
> intermediate data to local disk?
>
> Thanks,
> Ghousia.
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/OutOfMemory-Error-tp12275.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> -
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>