Can you please elaborate? I didn't get what you intended for me to read in
that link.
Regards.
On Mon, Oct 20, 2014 at 7:03 PM, Saurabh Wadhawan
saurabh.wadha...@guavus.com wrote:
What about:
1. All RDD operations are executed on workers. So reading a text file or
executing val x = 1 will happen on a worker. (link:
http://stackoverflow.com/questions/24637312/spark-driver-in-apache-spark)
2.
a. Without broadcast: let's say you have 'n' nodes. You can set Hadoop's
replication factor to n.
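For comparison, the usual way to get a small read-only dataset onto every node without touching HDFS replication is a Spark broadcast variable. A minimal sketch, assuming a local master and made-up path (neither is from this thread):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: broadcast a small lookup table to all workers instead of
// raising the HDFS replication factor. Names and paths are illustrative.
object BroadcastSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("broadcast-sketch").setMaster("local[2]"))

    val lookup = Map("a" -> 1, "b" -> 2)   // built once, on the driver
    val bLookup = sc.broadcast(lookup)     // shipped once to each worker

    val total = sc.textFile("hdfs:///tmp/keys.txt")
      .map(k => bLookup.value.getOrElse(k, 0)) // closure runs on workers
      .sum()

    println(total)
    sc.stop()
  }
}
```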
What about:
http://mail-archives.apache.org/mod_mbox/spark-user/201310.mbox/%3CCAF_KkPwk7iiQVD2JzOwVVhQ_U2p3bPVM=-bka18v4s-5-lp...@mail.gmail.com%3E
Regards
-
Any response for this?
1. How do I know which statements from the Spark script will be executed on
the worker side in a stage?
e.g. if I have
val x = 1 (or any other code)
in my driver code, will the same statements be executed on the worker side
in a stage?
2. How can I do a map side
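On question 1, the usual rule of thumb is that statements at the top level of the script run on the driver; only the closures passed to RDD operations (map, filter, etc.) are serialized and executed on the workers. A hedged sketch, assuming a local master and a made-up input path:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object DriverVsWorker {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("driver-vs-worker").setMaster("local[2]"))

    val x = 1 // plain statement: evaluated once, on the driver

    val lengths = sc.textFile("hdfs:///tmp/input.txt")
      // this closure is serialized and runs on the workers;
      // `x` is captured on the driver and shipped along with it
      .map(line => line.length + x)

    lengths.collect().foreach(println) // results come back to the driver

    sc.stop()
  }
}
```

So `val x = 1` itself is not re-executed on the workers; only its captured value travels inside the closure.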
Hi,
I have following questions:
1. When I write a Spark script, how do I know which part runs on the driver
side and which runs on the worker side?
So let's say I write code to read a plain text file.
Will it run on the driver side only, on the worker side only, or on
both sides?
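On the text-file question: my understanding is that sc.textFile is lazy, so the driver only records the path and partition layout, and the bytes are actually read on the workers when an action runs. A minimal sketch under that assumption (local master and path are made up):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WhereDoesTheReadHappen {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("read-location").setMaster("local[2]"))

    // Driver side: no bytes are read here; textFile only records
    // the path and how to split it into partitions.
    val lines = sc.textFile("hdfs:///tmp/input.txt")

    // Worker side: each executor reads its own partitions when this
    // action runs; only the final count is sent back to the driver.
    println(lines.count())

    sc.stop()
  }
}
```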