...fraction of the Java heap to use as the SortBuffer area.
You can find more information in this Jira:
https://issues.apache.org/jira/browse/SPARK-2045
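(A minimal configuration sketch, assuming a Spark 1.x-era job; the keys below
are the 1.x names and the values are illustrative, not taken from this thread:)

  // Sort-based shuffle (SPARK-2045) and the heap fraction it may use for
  // in-memory buffering before spilling to disk.
  import org.apache.spark.{SparkConf, SparkContext}

  val conf = new SparkConf()
    .setAppName("WordCount")
    .set("spark.shuffle.manager", "sort")        // sort-based shuffle implementation
    .set("spark.shuffle.memoryFraction", "0.2")  // fraction of the Java heap; 0.2 was the 1.x default
  val sc = new SparkContext(conf)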
Yong
Date: Fri, 2 Oct 2015 11:55:41 -0700
Subject: Re: Problem understanding spark word count execution
From: kar...@bluedata.com
To: java8...@hotmail.com
> ... you are using "rdd.collect()", which will transfer the final result to
> the driver and dump it in the console.
>
> Yong
>
> --
> Date: Fri, 2 Oct 2015 00:50:24 -0700
> Subject: Re: Problem understanding spark word count execution
> From: kar...@bluedata.com
> To: java8...@hotmail.com
rdd.collect()", which will transfer the final result to driver and dump it in
the console.
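(For reference, a minimal word-count sketch with a placeholder input path;
collect() is the step that brings the aggregated result back to the driver:)

  val counts = sc.textFile("input.txt")      // "input.txt" is a placeholder
    .flatMap(_.split(" "))
    .map(word => (word, 1))
    .reduceByKey(_ + _)

  // collect() materializes the final (word, count) pairs on the driver,
  // which is why the result appears in the driver console.
  counts.collect().foreach(println)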
Yong
Date: Fri, 2 Oct 2015 00:50:24 -0700
Subject: Re: Problem understanding spark word count execution
From: kar...@bluedata.com
To: java8...@hotmail.com
CC: nicolae.maras...@adswizz.com; user@spark.apache.org
> ... I don't have another explanation of its meaning.
>
> If your final output shows hundreds of unique words, then it is.
>
> The 2000 bytes sent to the driver are the final output, aggregated on the
> reducer end and merged back to the driver.
>
> Yong
>
>
> ---
The 2000 bytes sent to the driver are the final output, aggregated on the
reducer end and merged back to the driver.
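(A rough back-of-the-envelope check, using assumed numbers rather than anything
measured in this job, of why a few hundred unique words come to about 2 KB:)

  // Assumed figures for illustration only.
  val uniqueWords  = 200   // distinct words in the final output
  val bytesPerPair = 10    // rough size of one serialized (word, count) pair
  val approxBytes  = uniqueWords * bytesPerPair   // ~2000 bytes returned to the driver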
Yong
Date: Thu, 1 Oct 2015 13:33:59 -0700
Subject: Re: Problem understanding spark word count execution
From: kar...@bluedata.com
To: nicolae.maras...@adswizz.com
CC: user@spark.apache.org
Hi Nicolae, Thanks for the reply. To further clarify ...
> Maybe you can share more of your context if still unclear.
> I just made assumptions to give clarity on a similar thing.
>
> Nicu
> ------
> *From:* Kartik Mathur
> *Sent:* Thursday, October 1, 2015 10:25 PM
> *To:* Nicolae Marasoiu
> *Cc:* user
... I just made assumptions to give clarity on a similar thing.
Nicu
From: Kartik Mathur
Sent: Thursday, October 1, 2015 10:25 PM
To: Nicolae Marasoiu
Cc: user
Subject: Re: Problem understanding spark word count execution
Thanks Nicolae,
So in my case all executors are sending results back to the driver ...
> ... flatMap and map are narrow dependencies, meaning they can usually happen
> on the local node; I bet the shuffle is just sending out the textFile to a
> few nodes to distribute the partitions.
>
>
> --
> *From:* Kartik Mathur
> *Sent:* Thursday, October 1, 2015 12:42 AM
... flatMap and map are narrow dependencies, meaning they can usually happen on
the local node; I bet the shuffle is just sending out the textFile to a few
nodes to distribute the partitions.
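(A sketch of the lineage being discussed, with a placeholder input path, marking
which steps are narrow dependencies and where the wide, shuffle-producing
dependency begins:)

  val lines  = sc.textFile("input.txt")      // "input.txt" is a placeholder
  val words  = lines.flatMap(_.split(" "))   // narrow dependency: runs within stage 0
  val pairs  = words.map(word => (word, 1))  // narrow dependency: still stage 0
  val counts = pairs.reduceByKey(_ + _)      // wide dependency: stage 0 writes shuffle
                                             // output here and a new stage reads it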
From: Kartik Mathur
Sent: Thursday, October 1, 2015 12:42 AM
To: user
Subject: Problem understanding spark word count execution
Hi All,
I tried running Spark word count and I have a couple of questions -
I am analyzing stage 0, i.e.
*sc.textFile -> flatMap -> Map (Word count example)*
1) In the *Stage logs* under the Application UI details, for every task I am
seeing Shuffle write as 2.7 KB. *question - how can I know where all ...*
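(One way to see where the stage boundary and the shuffle write fall in this
lineage is RDD.toDebugString, which prints the dependency chain with shuffle
boundaries indented; the input path below is a placeholder:)

  val counts = sc.textFile("input.txt")
    .flatMap(_.split(" "))
    .map(word => (word, 1))
    .reduceByKey(_ + _)

  // The ShuffledRDD produced by reduceByKey sits above the indented
  // textFile/flatMap/map chain, i.e. stage 0 ends at the shuffle write.
  println(counts.toDebugString)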