Thanks for the quick response.  I was assuming that once it got the ack from 
the first bolt, the spout was done.  That’s not the case?  It needs to be sure 
the whole topology handled the tuple?
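For reference, with anchored emits the spout tuple is only acked once every tuple in the tree rooted at it has been acked, so an ack from the first bolt alone does not complete it. Below is a minimal illustrative sketch of an intermediate bolt doing an anchored emit and ack; PassThroughBolt is hypothetical (not from the topology in question) and the backtype.storm package names assume a pre-1.0 Storm release (they move to org.apache.storm from 1.0 on).

    import java.util.Map;

    import backtype.storm.task.OutputCollector;
    import backtype.storm.task.TopologyContext;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.base.BaseRichBolt;
    import backtype.storm.tuple.Fields;
    import backtype.storm.tuple.Tuple;
    import backtype.storm.tuple.Values;

    // Hypothetical bolt, for illustration only.
    public class PassThroughBolt extends BaseRichBolt {
        private OutputCollector collector_;

        @Override
        public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
            collector_ = collector;
        }

        @Override
        public void execute(Tuple input) {
            // Anchored emit: the new tuple joins the tree rooted at the spout
            // tuple, so the KafkaSpout is not acked until downstream bolts
            // ack this child as well.
            collector_.emit(input, new Values(input.getString(0)));
            // Acking here only completes this bolt's node in the tree, not
            // the whole tree.
            collector_.ack(input);
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("message"));
        }
    }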

Patrick


From: John Yost <hokiege...@gmail.com>
Reply-To: "user@storm.apache.org" <user@storm.apache.org>
Date: Wednesday, January 13, 2016 at 2:53 PM
To: "user@storm.apache.org" <user@storm.apache.org>
Subject: Re: Kafka Spout failing despite all bolts acking

Hi Patrick,

This means the tuples emitted by the KafkaSpout are not making it through your 
topology within the tuple timeout. I recommend finding the bolt whose capacity 
is close to 1.0 and increasing the number of executors for that bolt, then 
experimenting with max.spout.pending and the tuple timeout.
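
As a concrete starting point, here is a minimal tuning sketch. The bolt id "my-bolt", the PassThroughBolt from the sketch above, and the specific numbers are placeholders to experiment with; "my-spout" refers to the spout id in the snippet further down, and the package names again assume a pre-1.0 Storm release.

    import backtype.storm.Config;
    import backtype.storm.StormSubmitter;
    import backtype.storm.topology.TopologyBuilder;

    public class TuningSketch {
        public static void main(String[] args) throws Exception {
            TopologyBuilder builder_ = new TopologyBuilder();
            // ... setSpout("my-spout", ...) as in the snippet further down ...

            // Give the bottleneck bolt (the one with capacity ~1) more executors.
            builder_.setBolt("my-bolt", new PassThroughBolt(), 16)
                    .shuffleGrouping("my-spout");

            Config conf = new Config();
            // Cap the number of un-acked spout tuples in flight per spout task.
            conf.setMaxSpoutPending(500);
            // Allow the tuple tree more time to complete before the spout
            // replays it (the default is 30 seconds).
            conf.setMessageTimeoutSecs(60);

            StormSubmitter.submitTopology("my-topology", conf, builder_.createTopology());
        }
    }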

--John

On Wed, Jan 13, 2016 at 12:22 PM, Patrick May 
<patrick....@ignitionone.com> wrote:
I’m seeing a lot of failures reported by a Kafka spout, even though none of the 
processing bolts report any failures.

I configure the spout like this:

    SpoutConfig spoutConfig
      = new SpoutConfig(new ZkHosts(zookeeper, "/brokers"),
                        TOPIC,
                        "/" + TOPIC,
                        kafkaClientName);
    spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
    spoutConfig.forceFromStart = false;  // resume from the offsets stored in ZooKeeper

    KafkaSpout spout = new KafkaSpout(spoutConfig);
    . . .
    builder_.setSpout("my-spout", spout, 2);

I’ve also tried a spout parallelism of 16 to match the number of partitions, but 
I get the same behavior.  There is nothing from the spout in the logs.

Has anyone else run into this?

Thanks,

Patrick

