RE: Problem understanding spark word count execution

2015-10-02 Thread java8964
…fraction of Java heap to use as the SortBuffer area. You can find more information in this Jira: https://issues.apache.org/jira/browse/SPARK-2045 Yong
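A minimal Spark 1.x sketch of the setting being discussed here; the property names and values below are assumptions based on the 1.x configuration docs, not something stated in the thread:

    import org.apache.spark.{SparkConf, SparkContext}

    // Assumed Spark 1.x settings: "spark.shuffle.memoryFraction" was the fraction
    // of the Java heap available to shuffle sort/aggregation buffers before
    // spilling to disk; "spark.shuffle.manager" = "sort" selects the sort-based
    // shuffle introduced by SPARK-2045.
    val conf = new SparkConf()
      .setAppName("WordCount")
      .set("spark.shuffle.manager", "sort")
      .set("spark.shuffle.memoryFraction", "0.3")
    val sc = new SparkContext(conf)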

Re: Problem understanding spark word count execution

2015-10-02 Thread Kartik Mathur
are using > "rdd.collect()", which will transfer the final result to driver and dump it > in the console. > > Yong > > -- > Date: Fri, 2 Oct 2015 00:50:24 -0700 > Subject: Re: Problem understanding spark word count execution > From: kar...@bluedata.com > To: java8

RE: Problem understanding spark word count execution

2015-10-02 Thread java8964
rdd.collect()", which will transfer the final result to driver and dump it in the console. Yong Date: Fri, 2 Oct 2015 00:50:24 -0700 Subject: Re: Problem understanding spark word count execution From: kar...@bluedata.com To: java8...@hotmail.com CC: nicolae.maras...@adswizz.com; user@spa

Re: Problem understanding spark word count execution

2015-10-02 Thread Kartik Mathur
> …don't have another explanation of its meaning. > If your final output shows hundreds of unique words, then it is. > The 2000 bytes sent to the driver is the final output aggregated on the reducers' end, and merged back to the driver. > Yong

RE: Problem understanding spark word count execution

2015-10-01 Thread java8964
…output aggregated on the reducers' end, and merged back to the driver. Yong > Hi Nicolae, Thanks for the reply. To further cl…

Re: Problem understanding spark word count execution

2015-10-01 Thread Kartik Mathur
> Maybe you can share more of your context if still unclear. > I just made assumptions to give clarity on a similar thing. > Nicu

Re: Problem understanding spark word count execution

2015-10-01 Thread Nicolae Marasoiu
…to give clarity on a similar thing. Nicu > Thanks Nicolae, so in my case all executors are sending results back to…

Re: Problem understanding spark word count execution

2015-10-01 Thread Kartik Mathur
Thanks Nicolae, so in my case all executors are sending results back to the driver, and "*shuffle* is just sending out the textFile to distribute the partitions"; could you please elaborate on this? What exactly is in this file? On Wed, Sep 30, 2015 at 9:57 PM, Nicolae Marasoiu <nicolae…

Re: Problem understanding spark word count execution

2015-09-30 Thread Nicolae Marasoiu
Hi, 2 - the end results are sent back to the driver; the shuffles are the transmission of intermediate results between nodes, such as between the intermediate transformations. More precisely, since flatMap and map are narrow dependencies, meaning they can usually happen on the local node,…
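A sketch of the word-count pipeline Nicolae is describing, annotated with which steps are narrow dependencies (no data movement) and which step introduces the shuffle; the input path is a placeholder and an existing SparkContext `sc` is assumed.

    val lines  = sc.textFile("hdfs:///path/to/input")  // placeholder path
    val words  = lines.flatMap(_.split("\\s+"))         // narrow: runs on the local partition
    val pairs  = words.map(word => (word, 1))           // narrow: still no data movement
    val counts = pairs.reduceByKey(_ + _)               // wide: shuffles intermediate (word, count) pairs between nodes
    counts.collect()                                     // only the final aggregated result is sent back to the driver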