Hi,
If you don’t anchor at all, collector.ack will basically do nothing (because
input.getMessageId().getAnchorsToIds() is empty). It will still accumulate the
metrics (acked count), though. If you aren’t acking at all, you can set
“topology.acker.executors” to zero.
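For illustration, a minimal sketch of turning the ackers off in the topology
configuration (assuming the backtype.storm package layout; conf is the usual
topology Config object):

import backtype.storm.Config;

Config conf = new Config();
// With no anchoring/acking, the acker executors have nothing useful to do,
// so they can be disabled entirely.
conf.setNumAckers(0);   // same effect as setting topology.acker.executors = 0
// or, equivalently:
// conf.put(Config.TOPOLOGY_ACKER_EXECUTORS, 0);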
Code is here:
Ack:
https://git
Hi,
All of my bolts in the topology implement BaseRichBolt, and the following is the
signature of the prepare method. I do not pass the input tuple as an argument to
collector.emit in any of the bolts’ execute implementations, so essentially I am
not anchoring the tuples.
Since I am not anchoring
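For context, a minimal sketch of the difference inside a BaseRichBolt’s execute
method (names are illustrative; the first emit form, without the input tuple, is
the unanchored case described above):

@Override
public void execute(Tuple input) {
    // collector is the OutputCollector saved in prepare(...).
    String value = input.getString(0);

    // Unanchored emit: the emitted tuple is not tied to the input tuple,
    // so downstream failures are never reported back to the spout.
    collector.emit(new Values(value));

    // Anchored emit: passing the input tuple links the emitted tuple to it,
    // so the spout is acked only once the whole tuple tree succeeds.
    // collector.emit(input, new Values(value));

    // Ack the input tuple (a no-op if the input itself carries no anchors).
    collector.ack(input);
}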
> the first bolt, the spout was done. That’s not the case? It needs to
> be sure the whole topology handled the tuple?
>
> Patrick
>
>
> From: John Yost
> Reply-To: "user@storm.apache.org"
> Date: Wednesday, January 13, 2016 at 2:53 PM
> To: "user@storm.apach
Subject: Re: Kafka Spout failing despite all bolts acking
Hi Patrick,
This means the tuples emitted by the KafkaSpout are not making it all the way
through your topology within the tuple timeout. I recommend finding which bolt
has a capacity near 1.0 and increasing the number of executors for that bolt.
Then start experimenting with max.spout.pending and the timeouts.
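For example, a minimal sketch of the knobs involved (the values, the component
ids, and the SlowBolt/builder names are purely illustrative, not recommendations):

Config conf = new Config();
// Cap how many tuples the spout keeps in flight; a lower value lets slow
// bolts catch up before tuples hit the timeout.
conf.setMaxSpoutPending(500);
// Give tuples longer to traverse the whole topology (default is 30 seconds).
conf.setMessageTimeoutSecs(60);

// Raise parallelism on the bolt whose capacity is near 1.0.
builder.setBolt("slow-bolt", new SlowBolt(), 8).shuffleGrouping("kafka-spout");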
--John
On Wed, Jan 13, 2016 at 1
I’m seeing a lot of failures from a Kafka spout, despite none of the processing
bolts reporting any failures.
I configure the spout like this:
SpoutConfig spoutConfig
    = new SpoutConfig(new ZkHosts(zookeeper, "/brokers"),
                      TOPIC,
                      zkRoot,      // Zookeeper root path for offset storage (placeholder)
                      consumerId); // unique id for this spout's offsets (placeholder)
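For completeness, a sketch of how a SpoutConfig like this is typically wired into
the topology, assuming the same storm-kafka API; the ids, bolt class, and
parallelism numbers are illustrative:

// Deserialize Kafka message bytes as plain strings.
spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("kafka-spout", new KafkaSpout(spoutConfig), 4);
builder.setBolt("first-bolt", new FirstBolt(), 8).shuffleGrouping("kafka-spout");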