Re: Use cases for kafka direct stream messageHandler

2016-03-09 Thread Alan Braithwaite
I'd probably prefer to keep it the way it is, unless it's becoming more like the function without the messageHandler argument. Right now I have code like this, but I wish the two variants looked more alike: if (parsed.partitions.isEmpty()) { JavaPairInputDStream
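For context, the two variants under discussion differ in that one takes a messageHandler. A minimal sketch of the messageHandler form (Spark 1.x spark-streaming-kafka Scala API; the StreamingContext `ssc`, the `kafkaParams` map, the topic name, and the offsets are assumptions for illustration):

```scala
// Sketch only: requires the spark-streaming-kafka artifact and a running
// StreamingContext; not runnable standalone.
import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils

// Placeholder starting offsets for a hypothetical topic "events".
val fromOffsets = Map(TopicAndPartition("events", 0) -> 0L)

// The messageHandler argument maps each record as it is read; here it
// keeps the Kafka partition id alongside the message body.
val stream = KafkaUtils.createDirectStream[
    String, String, StringDecoder, StringDecoder, (Int, String)](
  ssc, kafkaParams, fromOffsets,
  (mmd: MessageAndMetadata[String, String]) => (mmd.partition, mmd.message()))
```

The variant without messageHandler instead returns `(key, value)` pairs directly, which is the asymmetry the thread is about.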

Re: Spark Streaming Specify Kafka Partition

2015-12-03 Thread Alan Braithwaite
et me know. > > Be aware that if you're doing a count() or take() operation directly on > the rdd it'll definitely give you the wrong result if you're using -1 for > one of the offsets. > > > > On Tue, Dec 1, 2015 at 9:58 AM, Alan Braithwaite <a...@cloudflare.com> >

Re: Spark Streaming Specify Kafka Partition

2015-12-01 Thread Alan Braithwaite
u specify > fromOffsets: Map[TopicAndPartition, Long] > > On Mon, Nov 30, 2015 at 7:43 PM, Alan Braithwaite <a...@cloudflare.com> > wrote: > >> Is there any mechanism in the kafka streaming source to specify the exact >> partition id th
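The `fromOffsets: Map[TopicAndPartition, Long]` parameter mentioned above is also how a job can be pinned to specific partitions: only the partitions present in the map are consumed. A hedged sketch (Spark 1.x Scala API; `ssc`, `kafkaParams`, the topic name, partition id, and offset are illustrative assumptions):

```scala
// Sketch only: requires the spark-streaming-kafka artifact and a running
// StreamingContext; not runnable standalone.
import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils

// Only partition 2 of the hypothetical topic "events" appears in the map,
// so only that partition is consumed, starting at offset 0.
val fromOffsets: Map[TopicAndPartition, Long] =
  Map(TopicAndPartition("events", 2) -> 0L)

val stream = KafkaUtils.createDirectStream[
    String, String, StringDecoder, StringDecoder, String](
  ssc, kafkaParams, fromOffsets,
  (mmd: MessageAndMetadata[String, String]) => mmd.message())
```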

Spark Streaming Specify Kafka Partition

2015-11-30 Thread Alan Braithwaite
Is there any mechanism in the kafka streaming source to specify the exact partition id that we want a streaming job to consume from? If not, is there a workaround besides writing our own custom receiver? Thanks, - Alan

Weird Spark Dispatcher Offers?

2015-10-02 Thread Alan Braithwaite
Hey All, Using spark with mesos and docker. I'm wondering if anybody's seen the behavior of spark dispatcher where it just continually requests resources and immediately declines the offer. https://gist.github.com/anonymous/41e7c91899b0122b91a7 I'm trying to debug some issues with spark and

Re: Weird Spark Dispatcher Offers?

2015-10-02 Thread Alan Braithwaite
> So if there are no jobs to run the dispatcher will decline all offers by > default. > > Also we list all the jobs enqueued and their specifications in the Spark > dispatcher UI, you should see the port in the dispatcher logs itself. > > Tim > > On Fri, Oct 2, 2015 at 11:46

Re: Weird Spark Dispatcher Offers?

2015-10-02 Thread Alan Braithwaite
2, 2015 at 11:36 AM, Tim Chen <t...@mesosphere.io> wrote: > Do you have jobs enqueued? And if none of the jobs matches any offer it > will just decline it. > > What's your job resource specifications? > > Tim > > On Fri, Oct 2, 2015 at 11:34 AM, Alan Braithwaite <a.

Re: Spark on Mesos with Jobs in Cluster Mode Documentation

2015-09-21 Thread Alan Braithwaite
configuration in the conf directory, but we try to pass all properties > submitted from the driver spark-submit through, which I believe will > override the defaults. > > This is not what you are seeing? > > Tim > > > On Sep 19, 2015, at 9:01 AM, Alan Braithwaite <a...

Re: Spark on Mesos with Jobs in Cluster Mode Documentation

2015-09-17 Thread Alan Braithwaite
One other piece of information: We're using zookeeper for persistence and when we brought the dispatcher back online, it crashed on the same exception after loading the config from zookeeper. Cheers, - Alan On Thu, Sep 17, 2015 at 12:29 PM, Alan Braithwaite <a...@cloudflare.com> wrote:

Re: Spark on Mesos with Jobs in Cluster Mode Documentation

2015-09-17 Thread Alan Braithwaite
Hey All, To bump this thread once again, I'm having some trouble using the dispatcher as well. I'm using Mesos Cluster Manager with Docker Executors. I've deployed the dispatcher as a Marathon job. When I submit a job using spark-submit, the dispatcher writes back that the submission was

Re: Spark on Mesos with Jobs in Cluster Mode Documentation

2015-09-17 Thread Alan Braithwaite
Small update: I found the --properties-file spark-submit parameter by reading the code and that seems to work, but it appears to be undocumented on the submitting-applications doc page. - Alan On Thu, Sep 17, 2015 at 12:39 PM, Alan Braithwaite <a...@cloudflare.com> wrote: > One ot
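The file passed to `--properties-file` uses the same `key value` format as `conf/spark-defaults.conf`. A sketch of such a file (the keys are standard Spark configuration properties; the values, file name, and Docker image are illustrative assumptions):

```properties
# my-spark.conf — sketch of a defaults file for --properties-file;
# values below are placeholders, not a recommended configuration.
spark.master                        mesos://mesos.master:5050
spark.mesos.executor.docker.image   docker.repo/spark:latest
spark.executor.memory               2g
```

It would then be supplied at submit time, e.g. `spark-submit --properties-file my-spark.conf --class Main app.jar`, in place of repeated `--conf` flags.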

Re: Spark on Mesos with Jobs in Cluster Mode Documentation

2015-09-17 Thread Alan Braithwaite
ly how you will need to launch it > with client mode. > > But indeed it shouldn't crash dispatcher, I'll take a closer look when I > get a chance. > > Can you recommend changes on the documentation, either in email or a PR? > > Thanks! > > Tim > > Sent from my iPhone >

Re: Spark-submit fails when jar is in HDFS

2015-08-09 Thread Alan Braithwaite
Did you try this way? /usr/local/spark/bin/spark-submit --master mesos://mesos.master:5050 --conf spark.mesos.executor.docker.image=docker.repo/spark:latest --class org.apache.spark.examples.SparkPi --jars hdfs://hdfs1/tmp/spark-examples-1.4.1-hadoop2.6.0-cdh5.4.4.jar 100 I did, and got