It's similar to deploying a topology on any remote cluster. Here are
step-by-step instructions:
1. Edit your client's storm.yaml and change nimbus.host to your
Docker nimbus container's IP (also make sure the nimbus container's ports
are published properly; see the sample storm.yaml below).
2. From your client, use *storm jar* to submit the topology to the remote nimbus.
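For example, a minimal client-side storm.yaml might look like this (the
container IP here is an assumption; 6627 is Storm's default Thrift port):

    nimbus.host: "172.17.0.2"   # your Docker nimbus container's IP
    nimbus.thrift.port: 6627    # default; publish this port from the container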
#9 - 5 points.
On Tue, Jun 10, 2014 at 4:32 AM, Andrew Neilson wrote:
> #10 - 5 points
>
>
> On Mon, Jun 9, 2014 at 2:25 PM, Benjamin Black wrote:
>
>> #10 - 5 pts
>>
>>
>> On Mon, Jun 9, 2014 at 2:22 PM, Ted Dunning wrote:
>>
>>>
>>> I love it. This is a real horse race!
>>>
You can read about Storm's replay mechanism here: fail(Object msgId) is
invoked when your tuple times out at the spout, and the replay
mechanism should be implemented there.
http://storm.incubator.apache.org/documentation/Concepts.html
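A minimal sketch of that pattern, assuming the 0.9-era backtype.storm API:
the spout keeps its own msgId -> tuple map, since fail() only receives the
message ID, and re-emits the failed tuple from there (the counter mirrors
the 1..10 example quoted below).

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import backtype.storm.spout.SpoutOutputCollector;
    import backtype.storm.task.TopologyContext;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.base.BaseRichSpout;
    import backtype.storm.tuple.Fields;
    import backtype.storm.tuple.Values;

    public class ReplayingSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private Map<Object, Values> pending;  // in-flight tuples, keyed by msgId
        private int index = 0;

        @Override
        public void open(Map conf, TopologyContext context,
                         SpoutOutputCollector collector) {
            this.collector = collector;
            this.pending = new ConcurrentHashMap<Object, Values>();
        }

        @Override
        public void nextTuple() {
            if (index >= 10) return;
            index++;
            Object msgId = Integer.valueOf(index);
            Values tuple = new Values(index);
            pending.put(msgId, tuple);
            collector.emit(tuple, msgId);  // emitting WITH a msgId enables tracking
        }

        @Override
        public void ack(Object msgId) {
            pending.remove(msgId);  // fully processed; stop tracking
        }

        @Override
        public void fail(Object msgId) {
            Values tuple = pending.get(msgId);  // recover the original tuple
            if (tuple != null) {
                collector.emit(tuple, msgId);   // replay it
            }
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("index"));
        }
    }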
On Thu, Jun 5, 2014 at 9:24 AM, Nhan Nguyen wrote:
Have you enabled the acker and overridden the fail() method on your Spout?
On Thu, Jun 5, 2014 at 9:15 AM, 傅駿浩 wrote:
> Hi, all
>
> I run a simple topology in distributed mode as follows:
> Spout: collector.emit(new Values(index)); // where index is an integer from
> 1, 2, ... to 10, each nex
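Note that the quoted emit call passes no message ID, so the acker never
tracks the tuple and fail() is never invoked. A minimal sketch of the
difference (assuming index is the value being emitted):

    collector.emit(new Values(index));        // unanchored: no ack/fail callbacks
    collector.emit(new Values(index), index); // with a msgId: tracked by the acker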
#11 - 3 points
#10 - 2 points
On Sat, May 17, 2014 at 7:48 AM, Jason Jackson wrote:
> #10 - 5 points.
>
>
> On Fri, May 16, 2014 at 1:34 PM, Brian Enochson wrote:
>
>>
>> #10 - 3 Points.
>> #1 - 1 Point
>> #2 - 1 Point
>>
>> Thanks,
>> Brian
>>
>>
>>
>> On Thu, May 15, 2014 at 12:28 PM, P. T
Hi,
You can't submit a topology to a supervisor directly; a topology must be
submitted to nimbus via the "storm jar" command, and nimbus will then
distribute it to the supervisors in the cluster, so your topology ends up
running on supervisors anyway. If a supervisor goes down while the topology
is running, its workload will be reassigned to other supervisors.
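For example, a client-side submission might look like this (the jar path,
main class, and topology name are hypothetical):

    storm jar target/my-topology.jar com.example.MyTopology my-topology-name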
Hi all,
I'm wondering if there is any way to retrieve a tuple from its messageId. I
need to get the tuple in the spout's fail() method to roll back some
operations after the tuple fails.
Thank you & Regards
Nhan Nguyen
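Storm only passes the messageId back to fail(), so the usual approach is for
the spout to track emitted tuples itself. A minimal sketch, assuming a
pending msgId -> tuple map as in the replaying spout above and a hypothetical
rollback() helper:

    @Override
    public void fail(Object msgId) {
        // Storm only hands back the message ID, so the spout must keep its
        // own msgId -> tuple map to recover the original values.
        Values tuple = pending.remove(msgId);
        if (tuple != null) {
            rollback(tuple);  // hypothetical: undo this tuple's side effects
        }
    }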