Hi,
I have an issue getting Spark jobs to run on a Mesos cluster.
(Most probably it's a config issue - I hope - but let me explain what I
did):
- installed Mesos on a cluster (1 master and 3 workers) with
ZooKeeper support.
- Mesos is running fine:
curl
Hey, this seems to be a problem in the docs about how to set the executor URI.
It looks like the SPARK_EXECUTOR_URI variable is not actually used. Instead,
set the spark.executor.uri Java system property using
System.setProperty("spark.executor.uri", "<your URI>") before you create a
SparkContext.
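A minimal sketch of what that looks like in practice. The HDFS path and the Mesos/ZooKeeper master URL below are placeholders I made up for illustration, not values from this thread:

```scala
object ExecutorUriExample {
  def main(args: Array[String]): Unit = {
    // Placeholder URI: point this at wherever your built Spark
    // distribution tarball actually lives (HDFS, HTTP, etc.).
    System.setProperty("spark.executor.uri",
      "hdfs://namenode:8020/frameworks/spark-0.8.0-incubating.tar.gz")

    // The property must be set BEFORE constructing the SparkContext,
    // e.g. (placeholder master URL):
    // val sc = new SparkContext("mesos://zk://zkhost:2181/mesos", "MyApp")

    println(System.getProperty("spark.executor.uri"))
  }
}
```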
Hi Ryan,
Spark Streaming ships with a special version of the Kafka 0.7.2 client that we
ported to Scala 2.9, and you need to add that as a JAR explicitly in your
project. The JAR is in
streaming/lib/org/apache/kafka/kafka/0.7.2-spark/kafka-0.7.2-spark.jar under
Spark. The streaming/lib
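One way to pull the bundled JAR into a build is as an unmanaged sbt dependency. This is a sketch under the assumption that a SPARK_HOME environment variable points at your Spark checkout; adjust the path to match your layout:

```scala
// build.sbt -- sketch; SPARK_HOME is an assumed environment variable
unmanagedJars in Compile += file(
  sys.env("SPARK_HOME") +
    "/streaming/lib/org/apache/kafka/kafka/0.7.2-spark/kafka-0.7.2-spark.jar")
```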
Hi Alex,
Unfortunately there seems to be something wrong with how the generics on that
method get seen by Java. You can work around it by calling this with:
plans.saveAsHadoopFiles("hdfs://localhost:8020/user/hue/output/completed",
"csv", String.class, String.class, (Class)
Hi Matei,
Ok, thanks, I will try it. Indeed, using saveAsNewAPIHadoopFile was not
working, as TableOutputFormat implements Configurable and its setConf
method was never called.
BTW you have done a great job with Spark; it combines so nicely with Scala,
the API is clean and is really easy to work
Out of curiosity, does the Scala 2.10 Spark interpreter patch
fix this using macros as Matei suggests in the linked discussion? Or is
that still future work, but now possible?
On Fri, Oct 11, 2013 at 6:04 PM, Reynold Xin r...@apache.org wrote:
This is a known problem and has to do with
That's a TODO that is either now possible in the 2.10 branch or pretty
close to possible -- which isn't the same thing as easy.
On Sat, Oct 12, 2013 at 2:20 PM, Aaron Davidson ilike...@gmail.com wrote:
Out of curiosity, does the Scala 2.10 Spark interpreter patch
fix this using macros as