I'd probably prefer to keep it the way it is, unless it's becoming more
like the function without the messageHandler argument.
Right now I have code like this, but I wish it looked more similar:
if (parsed.partitions.isEmpty()) {
JavaPairInputDStream
Let me know.
>
> Be aware that if you're doing a count() or take() operation directly on
> the rdd it'll definitely give you the wrong result if you're using -1 for
> one of the offsets.
>
>
>
> On Tue, Dec 1, 2015 at 9:58 AM, Alan Braithwaite <a...@cloudflare.com>
> wrote:
> ...you specify
> fromOffsets: Map[TopicAndPartition, Long]
>
> On Mon, Nov 30, 2015 at 7:43 PM, Alan Braithwaite <a...@cloudflare.com>
> wrote:
>
>> Is there any mechanism in the kafka streaming source to specify the exact
>> partition id that we want a streaming job to consume from?
Is there any mechanism in the kafka streaming source to specify the exact
partition id that we want a streaming job to consume from?
If not, is there a workaround besides writing our own custom receiver?
Thanks,
- Alan
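For reference, pinning a direct stream to specific partitions falls out of the fromOffsets map mentioned above: only partitions that have an entry in the map are consumed. Here is a minimal sketch of building that map in plain Java; note that TopicAndPartition below is a simplified stand-in for kafka.common.TopicAndPartition, and pinnedOffsets is an illustrative helper, not part of the Spark API:

```java
import java.util.HashMap;
import java.util.Map;

public class PinnedOffsets {
    // Stand-in for kafka.common.TopicAndPartition (assumption: the real
    // class from the Kafka 0.8 client is what you would actually use).
    static final class TopicAndPartition {
        final String topic;
        final int partition;
        TopicAndPartition(String topic, int partition) {
            this.topic = topic;
            this.partition = partition;
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof TopicAndPartition)) return false;
            TopicAndPartition t = (TopicAndPartition) o;
            return t.topic.equals(topic) && t.partition == partition;
        }
        @Override public int hashCode() { return topic.hashCode() * 31 + partition; }
    }

    // Build a fromOffsets map containing only the partitions we want to
    // consume; partitions absent from the map are never read.
    static Map<TopicAndPartition, Long> pinnedOffsets(String topic, int[] partitions, long start) {
        Map<TopicAndPartition, Long> offsets = new HashMap<>();
        for (int p : partitions) {
            offsets.put(new TopicAndPartition(topic, p), start);
        }
        return offsets;
    }

    public static void main(String[] args) {
        // e.g. consume only partitions 0 and 2 of "events", from offset 0
        Map<TopicAndPartition, Long> offsets = pinnedOffsets("events", new int[]{0, 2}, 0L);
        System.out.println(offsets.size()); // prints 2
    }
}
```

The resulting map would then be passed as the fromOffsets argument to KafkaUtils.createDirectStream (the overload that also takes a messageHandler), which restricts consumption to exactly those partitions.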
Hey All,
Using Spark with Mesos and Docker.
I'm wondering if anybody's seen the behavior of the Spark dispatcher where it
just continually requests resources and immediately declines the offers.
https://gist.github.com/anonymous/41e7c91899b0122b91a7
I'm trying to debug some issues with Spark and
> So if there are no jobs to run, the dispatcher will decline all offers by
> default.
>
> Also, we list all the enqueued jobs and their specifications in the Spark
> dispatcher UI; you should see the port in the dispatcher logs itself.
>
> Tim
>
> On Fri, Oct 2, 2015 at 11:36 AM, Tim Chen <t...@mesosphere.io> wrote:
> Do you have jobs enqueued? And if none of the jobs matches any offer it
> will just decline it.
>
> What's your job resource specifications?
>
> Tim
>
> On Fri, Oct 2, 2015 at 11:34 AM, Alan Braithwaite <a...@cloudflare.com> wrote:
> ...configuration in the conf directory, but we try to pass all properties
> submitted from the driver through spark-submit, which I believe will
> override the defaults.
>
> Is this not what you are seeing?
>
> Tim
>
>
> On Sep 19, 2015, at 9:01 AM, Alan Braithwaite <a...
One other piece of information:
We're using ZooKeeper for persistence, and when we brought the dispatcher
back online, it crashed with the same exception after loading the config from
ZooKeeper.
Cheers,
- Alan
On Thu, Sep 17, 2015 at 12:29 PM, Alan Braithwaite <a...@cloudflare.com>
wrote:
Hey All,
To bump this thread once again, I'm having some trouble using the
dispatcher as well.
I'm using the Mesos Cluster Manager with Docker executors. I've deployed the
dispatcher as a Marathon job. When I submit a job using spark-submit, the
dispatcher writes back that the submission was
Small update: I found the --properties-file spark-submit parameter by reading
the code, and that seems to work, but it appears to be undocumented on the
Submitting Applications doc page.
- Alan
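For anyone searching the archives later: the file given to --properties-file is a plain text file of whitespace-separated key/value pairs, using the same keys you would otherwise pass with --conf. An illustrative fragment (the first two values come from this thread; the spark.executor.memory line is a hypothetical example, not a default):

```
spark.master                        mesos://mesos.master:5050
spark.mesos.executor.docker.image   docker.repo/spark:latest
spark.executor.memory               2g
```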
On Thu, Sep 17, 2015 at 12:39 PM, Alan Braithwaite <a...@cloudflare.com>
wrote:
> One other piece of information:
> ...likely how you will need to launch it
> with client mode.
>
> But indeed it shouldn't crash the dispatcher; I'll take a closer look when I
> get a chance.
>
> Can you recommend changes on the documentation, either in email or a PR?
>
> Thanks!
>
> Tim
>
> Sent from my iPhone
>
Did you try this way?
/usr/local/spark/bin/spark-submit --master mesos://mesos.master:5050 --conf
spark.mesos.executor.docker.image=docker.repo/spark:latest --class
org.apache.spark.examples.SparkPi --jars
hdfs://hdfs1/tmp/spark-examples-1.4.1-hadoop2.6.0-cdh5.4.4.jar 100
I did, and got