Re: Maintaining overall cumulative data in Spark Streaming

2015-10-30 Thread Sandeep Giri
How do we reset the aggregated statistics to null? Regards, Sandeep Giri, +1 347 781 4573 (US) +91-953-899-8962 (IN) www.KnowBigData.com. Phone: +1-253-397-1945 (Office)
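One common answer (a sketch, not taken from the thread): the update function passed to Spark Streaming's `updateStateByKey` can drop a key's state by returning `None`. The function below is plain Scala with the same shape as that update function; the `reset` flag is a hypothetical trigger for when the statistics should be cleared.

```scala
// Shape of the update function given to DStream.updateStateByKey[Long].
// Returning None removes the key's state, i.e. resets the cumulative value;
// the `reset` flag here is a hypothetical signal for that reset.
def updateTotal(reset: Boolean)(newValues: Seq[Long], running: Option[Long]): Option[Long] =
  if (reset) None
  else Some(running.getOrElse(0L) + newValues.sum)
```

In a real job this would be wired as `stream.updateStateByKey(updateTotal(resetFlag))`, with the flag driven by whatever condition should clear the totals.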

Re: Re: repartitionAndSortWithinPartitions task shuffle phase is very slow

2015-10-30 Thread Luke Han
We would love any suggestions or comments about our implementation. Has anyone had experience with this? Thanks. Best Regards! - Luke Han On Tue, Oct 27, 2015 at 10:33 AM, ć‘šćƒæ˜Š wrote: > I have replaced the default Java serialization with Kryo. > It indeed reduced the sh…
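For reference, switching Spark to Kryo serialization, as the quoted message describes, is a configuration change. A minimal `spark-defaults.conf` fragment (the buffer size shown is an illustrative value, not a recommendation):

```
spark.serializer                 org.apache.spark.serializer.KryoSerializer
# Raise the max Kryo buffer if large records overflow the default (64m)
spark.kryoserializer.buffer.max  256m
```

Frequently serialized classes can also be registered up front via `SparkConf.registerKryoClasses` to shrink the serialized output further.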

Off-heap storage and dynamic allocation

2015-10-30 Thread Justin Uang
Hey guys, According to the docs for 1.5.1, when an executor is removed under dynamic allocation, its cached data is gone. If I use off-heap storage like Tachyon, conceptually this issue goes away, but is the cached data still available in practice? This would be great because then we would…
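For context, the 1.5-era configuration docs expose the external block store (Tachyon) through `spark.externalBlockStore.*` keys, and an RDD opts in with `StorageLevel.OFF_HEAP`. A sketch with placeholder host and path values that would need to match the actual Tachyon deployment (verify the key names against your exact Spark version):

```
# spark-defaults.conf — external block store settings (placeholder values)
spark.externalBlockStore.blockManager  org.apache.spark.storage.TachyonBlockManager
spark.externalBlockStore.url           tachyon://localhost:19998
spark.externalBlockStore.baseDir       /tmp_spark_tachyon
```

With this in place, a cached dataset would be persisted as `rdd.persist(StorageLevel.OFF_HEAP)` so the blocks live in Tachyon rather than in executor heap.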

Re: test failed due to OOME

2015-10-30 Thread Ted Yu
This happened recently on Jenkins: https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-Maven-with-YARN/HADOOP_PROFILE=hadoop-2.3,label=spark-test/3964/console On Sun, Oct 18, 2015 at 7:54 AM, Ted Yu wrote: > From > https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-Maven-with-YARN/HADO…

Re: test failed due to OOME

2015-10-30 Thread shane knapp
here are the current heap settings on our workers: InitialHeapSize == 2.1G, MaxHeapSize == 32G; system RAM: 128G. we can bump it pretty easily... it's just a matter of deciding whether we want to do this globally (super easy, but it will affect ALL Maven builds on our system -- not just Spark) or on a per-j…
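A per-job override (the values here are hypothetical, not Jenkins policy) would raise only the build's Maven JVM heap via `MAVEN_OPTS`, leaving the machine-wide defaults untouched:

```shell
# Hypothetical per-job Jenkins setting: raise this build's Maven JVM heap
# without touching the global defaults that all other builds inherit.
export MAVEN_OPTS="-Xmx6g -XX:ReservedCodeCacheSize=512m"
echo "$MAVEN_OPTS"
```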

Re: test failed due to OOME

2015-10-30 Thread Mridul Muralidharan
It is giving OOM at 32GB? Something looks wrong with that ... that is already on the higher side. Regards, Mridul On Fri, Oct 30, 2015 at 11:28 AM, shane knapp wrote: > here's the current heap settings on our workers: > InitialHeapSize == 2.1G > MaxHeapSize == 32G > > system ram: 128G > > we c…

Re: test failed due to OOME

2015-10-30 Thread Ted Yu
I noticed that the SparkContext created in each sub-test is not stopped when the sub-test finishes. Would stopping each SparkContext make a difference in heap memory consumption? Cheers On Fri, Oct 30, 2015 at 12:04 PM, Mridul Muralidharan wrote: > It is giving OOM at 32GB ? Something l…
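The fix being suggested is the usual loan pattern: create the context, run the sub-test body, and stop the context in a `finally` so its heap is released even when the test fails. A minimal sketch with a hypothetical stand-in class (a real suite would use `SparkContext` and, for example, ScalaTest's `BeforeAndAfterEach`):

```scala
// Hypothetical stand-in for SparkContext, just to illustrate the pattern.
class FakeContext {
  var stopped = false
  def stop(): Unit = stopped = true
}

// Loan pattern: the context is always stopped, even if the body throws,
// so each sub-test releases its heap before the next one starts.
def withContext[A](body: FakeContext => A): A = {
  val sc = new FakeContext
  try body(sc) finally sc.stop()
}
```

Each sub-test then runs as `withContext { sc => ... }` instead of holding a never-stopped context for the lifetime of the suite.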