Hi all,

(Waking up an old thread, just for future reference.)

We had a very similar issue just a couple of days ago: executing a Spark
driver on the same host where the Mesos master runs succeeds, but
executing it on our remote dev station hangs and then fails after Mesos
reports the Spark driver.
https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/streaming/RecoverableNetworkWordCount.scala
On Fri, Sep 26, 2014 at 3:09 PM, Svend Vanderveken
<svend.vanderve...@gmail.com> wrote:
Hi all,
I apologise for re-posting this, I realise some

Our Spark build is packaged with CDH 5.1.0 and Hive:
    sbt/sbt clean assembly/assembly -Dhadoop.version=2.3.0-mr1-cdh5.1.0 -Phive
    ./make-distribution.sh --tgz --skip-java-test -Dhadoop.version=2.3.0-mr1-cdh5.1.0 -Phive
Any comment or suggestion would be greatly appreciated.
On Thu, Sep 25, 2014 at 4:20 PM, Svend wrote:
I experience Spark Streaming restart issues similar to those discussed in
the two threads below (in which I failed to find a solution). Could anybody
let me know if anything is wrong in the way I start/stop the application, or
whether this could be a Spark bug?
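For reference, the standard restart-from-checkpoint pattern (the one the
RecoverableNetworkWordCount example linked earlier in this thread follows) looks
roughly like the sketch below. The checkpoint directory, host/port, and batch
interval are placeholders, not values from the original setup:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object RestartSketch {
  // Placeholder: any HDFS (or local) directory reachable by the driver.
  val checkpointDir = "hdfs:///tmp/streaming-checkpoint"

  // Called only when no checkpoint exists yet. The whole DStream graph must
  // be defined inside this function for recovery to work correctly.
  def createContext(): StreamingContext = {
    val conf = new SparkConf().setAppName("restart-sketch")
    val ssc = new StreamingContext(conf, Seconds(10))
    ssc.checkpoint(checkpointDir)
    val lines = ssc.socketTextStream("localhost", 9999) // placeholder source
    lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()
    ssc
  }

  def main(args: Array[String]): Unit = {
    // Recovers from the checkpoint if one exists, otherwise builds a fresh context.
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}
```

If the driver is stopped and relaunched pointing at the same checkpoint
directory, `getOrCreate` rebuilds the context from the checkpoint instead of
calling `createContext` again.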
Hi,
Yes, the error still occurs when we replace the lambdas with named
functions:
(same error traces as in previous posts)
with the same installation works fine:

The HDFS files contain just plain CSV files:

spark-env.sh looks like this:
Any help, comment or pointer would be greatly appreciated!
Thanks in advance
Svend
--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com
Hi Michael,
Thanks for your reply. Yes, the reduce triggered the actual execution; I got
a total length (totalLength: 95068762, for the record).
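For context, this is the standard lazy-evaluation behaviour of RDDs:
transformations such as `map` build up a lineage only, and an action such as
`reduce` is what actually runs the job. A minimal sketch (the file path is a
placeholder, not the path from this thread):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object ReduceForcesExecution {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("reduce-sketch"))

    // Transformations are lazy: nothing is read or computed yet.
    val lines = sc.textFile("hdfs:///path/to/data.csv") // placeholder path
    val lineLengths = lines.map(_.length)

    // reduce is an action: it triggers the whole lineage to execute
    // and ships the final result back to the driver.
    val totalLength = lineLengths.reduce(_ + _)
    println(s"totalLength: $totalLength")

    sc.stop()
  }
}
```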