I am not familiar with that particular piece of code, but Spark's
concurrency comes from multi-threading. One executor uses multiple threads to
process tasks, and those tasks share the executor's JVM memory. So it is
not surprising that Spark needs some blocking/synchronization for that shared memory
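A minimal sketch of the point above (not Spark code; the shared counter and thread count are made up for illustration): several task threads mutating one shared in-memory structure need a lock, otherwise read-modify-write interleavings lose updates.

```python
import threading

# Hypothetical stand-in for an executor's shared JVM memory:
# several task threads update one shared counter.
shared = {"bytes_used": 0}
lock = threading.Lock()

def task(n):
    for _ in range(n):
        # Without the lock, concurrent increments could interleave and lose updates.
        with lock:
            shared["bytes_used"] += 1

threads = [threading.Thread(target=task, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared["bytes_used"])  # 40000: no updates lost while the lock is held
```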
Hi Yong,
It makes sense...almost. :) I'm not sure how relevant it is, but just
today I was reviewing the BlockInfoManager code with its locks for reading
and writing, and my reading of the code is that Spark is fine
when there are multiple attempts to write new memory blocks
(pages) with
That just makes sense, doesn't it?
The only place is the driver. Otherwise, the executors would be contending
over which of them should create the directory.
The coordinator (the driver, in this case) is the best place to do it.
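A minimal sketch of the contention Yong describes (hypothetical paths and worker count, not Spark's actual code): if every worker races to create the same output directory, exactly one succeeds and the rest hit an error, which is why a single coordinator should create it up front.

```python
import os
import tempfile
import threading

results = []

def create_output_dir(path):
    # Each "worker" races to create the same directory.
    try:
        os.mkdir(path)  # atomic: raises FileExistsError for all but one caller
        results.append("created")
    except FileExistsError:
        results.append("lost the race")

base = tempfile.mkdtemp()
out = os.path.join(base, "test1")  # hypothetical output directory
workers = [threading.Thread(target=create_output_dir, args=(out,)) for _ in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(sorted(results))  # exactly one worker reports "created"
```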
Yong
From: math...@closetwork.org
Date: Wed, 25 May 2016
Experience. I don't use Mesos or Yarn or Hadoop, so I don't know.
On Wed, May 25, 2016 at 2:51 AM Jacek Laskowski wrote:
Hi Mathieu,
Thanks a lot for the answer! I did *not* know it's the driver to
create the directory.
You said "standalone mode", is this the case for the other modes -
yarn and mesos?
p.s. Did you find it in the code or...just experienced before? #curious
Pozdrawiam,
Jacek Laskowski
Thanks Mathieu,
So I must either have a shared filesystem or use Hadoop (HDFS) as the filesystem in order to write data from a Standalone-mode cluster setup. Thanks for your input.
Regards
Stuti Awasthi
From: Mathieu Longtin [math...@closetwork.org]
Sent: Tuesday, May 24, 2016 7:34 PM
To: Stuti
In standalone mode, the executors assume they have access to a shared file
system. The driver creates the output directory and the executors write the part
files into it; without a shared file system, the directory only exists on the
driver's machine, so the executors end up not writing anything.
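A minimal sketch of the failure mode Mathieu describes (hypothetical paths and file names, not Spark code): a write into a directory that was created on a different machine, and therefore does not exist locally, simply fails.

```python
import os
import tempfile

base = tempfile.mkdtemp()

# Directory the driver created on *its* machine; on an executor's machine
# the same path does not exist locally.
missing_dir = os.path.join(base, "test1", "_temporary")

try:
    # An executor trying to write a part file under the nonexistent path.
    with open(os.path.join(missing_dir, "part-00000"), "w") as f:
        f.write("data")
    outcome = "written"
except FileNotFoundError:
    outcome = "failed: parent directory does not exist on this node"

print(outcome)
```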
On Tue, May 24, 2016 at 8:01 AM Stuti Awasthi wrote:
hi Jacek,
The parent directory is already present; it's my home directory. I'm using a 64-bit Linux (Red Hat) machine.
Also, I noticed that a "test1" folder is created on my master with an empty "_temporary" subdirectory, but on the slaves no such directory is created under /home/stuti.
Thanks
Stuti
Hi,
What happens when you create the parent directory /home/stuti? I think the
failure is due to missing parent directories. What's the OS?
Jacek
On 24 May 2016 11:27 a.m., "Stuti Awasthi" wrote:
Hi All,
I have 3 nodes Spark 1.6 Standalone mode cluster with 1 Master and