Yes, so far they've been built on that assumption: not that Akka would
*guarantee* delivery in the sense that as soon as the send() call returns you
know it's delivered, but that Akka would behave the same way as a TCP socket,
letting you send a stream of messages in order and hear when the connection
Hi
I am encountering an issue where the executor actor cannot connect to the
Driver actor, but I cannot figure out the reason.
Say the Driver actor is listening on :35838
root@sr434:~# netstat -lpv
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address
The Apache MRQL team is pleased to announce the release of
Apache MRQL 0.9.0-incubating.
This is the first release under the Apache Incubator umbrella.
Apache MRQL is a query processing and optimization system for
large-scale, distributed data analysis, built on top of
Apache Hadoop, Hama, and
they are in streaming.dstream.WindowDStream.
Thanks.
I've seen this happen before due to the driver doing long GCs when the
driver machine was heavily memory-constrained. For this particular issue,
simply freeing up memory used by other applications fixed the problem.
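If long driver GCs are the suspect, one way to confirm is to turn on GC logging for the driver JVM. A minimal sketch, assuming a 0.8-era standalone deployment where driver JVM options are passed via SPARK_JAVA_OPTS in spark-env.sh (the GC flags themselves are standard JVM options):

```shell
# Sketch: enable GC logging on the driver JVM to confirm long pauses.
# How the options are passed depends on your Spark version; a 0.8-era
# standalone deployment (assumed here) uses SPARK_JAVA_OPTS.
export SPARK_JAVA_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
# Then watch the driver's stdout/stderr: back-to-back multi-second
# "Full GC" lines while the job appears to hang suggest the driver
# machine is memory-constrained.
```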
On Fri, Nov 1, 2013 at 12:14 AM, Liu, Raymond raymond@intel.com wrote:
Hi
Are there any heuristics for diagnosing the case where the scheduler says it
is missing parents and just hangs?
On Thu, Oct 31, 2013 at 4:56 PM, Walrus theCat walrusthe...@gmail.com wrote:
Hi,
I'm not sure what's going on here. My code seems to be working thus far
(map at SparkLR:90 completed.) What can I do
Hello,
I am new to Spark and doing my first steps with it today.
Right now I am having trouble with the error:
ERROR Worker: Connection to master failed! Shutting down.
So far, I found out the following:
The standalone version of Spark (without Hadoop-HDFS and YARN) works
perfectly. The
Can you try using the IP address instead of the hostname 'base'? I experienced
the same problem before, and it worked after changing to the IP.
Thanks,
Chen jingci
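The hostname-vs-IP fix above can be sketched as follows, assuming a 0.8-era standalone deployment (the master address 192.168.0.10 and port 7077 are placeholders for your own setup):

```shell
# Sketch, assuming a Spark 0.8-era standalone deployment.
# Register the worker against the master's IP instead of its hostname,
# to rule out DNS/hostname-resolution mismatches between machines.
# (192.168.0.10:7077 is a placeholder for your master's address.)
./spark-class org.apache.spark.deploy.worker.Worker spark://192.168.0.10:7077
```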
-Original Message-
From: Thorsten Bergler thorsten.berg...@tbonline.de
Sent: 2/11/2013 2:03
To: user@spark.incubator.apache.org
I think that parallelize() keeps its list in the driver to provide
resiliency for the RDDs that it creates: Spark doesn't know the lineage
that was used to create the items passed to parallelize(), so it needs to
keep a copy of those items in the driver to allow the RDD's blocks to be
recomputed.
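The point above can be illustrated with a toy sketch in Python. This is not Spark's actual implementation, just a minimal RDD-like class showing why the driver must retain a copy of the parallelize()'d data: unlike a file-backed RDD, there is no external source to re-read, so the retained copy is the only way to rebuild a lost partition.

```python
# Toy illustration (not Spark's actual code): an RDD created from an
# in-memory list has no external lineage, so the driver keeps a copy
# of the data to allow any partition to be recomputed on demand.

class ParallelizedRDD:
    def __init__(self, data, num_partitions):
        self.data = list(data)            # driver-side copy: the only "lineage"
        self.num_partitions = num_partitions

    def compute(self, partition_id):
        """Recompute one partition from the retained driver-side copy."""
        n = len(self.data)
        start = partition_id * n // self.num_partitions
        end = (partition_id + 1) * n // self.num_partitions
        return self.data[start:end]

rdd = ParallelizedRDD(range(10), num_partitions=3)
# Any partition can be rebuilt at any time because the driver kept the data.
print(rdd.compute(0))  # [0, 1, 2]
print(rdd.compute(2))  # [6, 7, 8, 9]
```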