fields grouping, and other groupings, regardless of the number of bolt
tasks. However, if you create tasks for this bolt using the same class but a
different name in the topology, you can route long-running bolts (without
ack) to the separate instances, and they will not affect your normal
processing.
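The isolation the reply describes can be illustrated without a cluster. Below is a minimal plain-JDK sketch (not Storm API) of the idea: the same worker logic registered twice, with the rare long-running work routed to its own single-threaded instance so it cannot delay the normal path. The Storm equivalents mentioned in the comments (an `ExportBolt` class and the component names "export" and "export-slow") are hypothetical names for illustration.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Plain-JDK analogy of routing long-running work to a separate instance.
// In Storm this would correspond to registering the same bolt class under
// two component names, e.g. builder.setBolt("export", new ExportBolt(), 4)
// and builder.setBolt("export-slow", new ExportBolt(), 1), then grouping
// the rare slow tuples only to "export-slow".
public class IsolationSketch {
    // Returns how long a "normal" tuple waits while a long-running tuple
    // is occupying the dedicated slow instance.
    public static long normalLatencyMillis() throws Exception {
        ExecutorService normal = Executors.newSingleThreadExecutor();
        ExecutorService slow = Executors.newSingleThreadExecutor();
        // A long-running "tuple" ties up only the slow instance.
        slow.submit(() -> {
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        long start = System.nanoTime();
        normal.submit(() -> {}).get(); // normal processing proceeds immediately
        long elapsed = (System.nanoTime() - start) / 1_000_000;
        slow.shutdownNow();
        normal.shutdown();
        return elapsed;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("normal path latency ~" + normalLatencyMillis() + " ms");
    }
}
```

The normal executor finishes its task right away even though the slow one is busy for two seconds, which is the point of giving the long-running bolt its own instances.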
on a signal spout (zookeeper signals)
> 3. Give your new topology a ridiculously high amount of time for
> processing a single tuple
> 4. Have your current topology use SignalClient to post a zookeeper message
> for the new one, when the last tuple is ready to be processed
>
>
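Step 4's SignalClient call might look like the sketch below. This is a hedged sketch, assuming the storm-signals library is on the classpath and following the start/send/close usage shown in that project's README; the ZooKeeper connect string "localhost:2181", the spout name "export-signal-spout", and the payload are placeholder assumptions.

```java
import backtype.storm.contrib.signals.client.SignalClient;

public class LastTupleSignal {
    public static void main(String[] args) throws Exception {
        // Placeholders: your ZooKeeper connect string and the name the
        // signal spout was registered with in the new topology.
        SignalClient sc = new SignalClient("localhost:2181", "export-signal-spout");
        sc.start();
        try {
            // Any payload works; the signal spout emits it as a tuple.
            sc.send("last-tuple-ready".getBytes());
        } finally {
            sc.close();
        }
    }
}
```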
> On Tue, Jun 2, 2015 at 8:04 AM, Subrat Basnet (mailto:sub...@myktm.com) wrote:
> > Hi there,
> >
> > Is it normal to have long running bolts once in a while? When I say long
> > running, I’m talking about a bolt that takes a few hours to process a tuple.
> >
> > I need to export data, push notifications and upload files with this when I
> > reach the LAST tuple of a sequence of tuples. This does
I'm trying to shut down a long-running bolt (based on the sample
ExclamationBolt), but it seems the cluster.killTopology() call only
interrupts one task and continues to execute the other tasks. To simplify
things I am running only one bolt in the topology. This is the bolt code I
am running (local