setNumWorkers is already set to 1 for all topologies.
The problem I'm facing is the same as the one described here:
https://groups.google.com/forum/#!topic/storm-user/wLOq1nImRWQ
I have 5 AWS supervisor nodes running, so theoretically there should be
5*4=20 worker slots. But when I submit 5 topologies from the 6th AWS
server, which runs Nimbus, only 4 of the topologies get one worker each.
The 5th topology gets 0 workers.
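For the record, here is roughly the pattern each of the 5 topologies
follows (a sketch, not the actual code: MySpout, MyBolt, and the class
name are made up for illustration; the imports assume Storm 1.x, where
the package is org.apache.storm rather than backtype.storm):

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class SubmitExample {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // MySpout and MyBolt stand in for our real components
        builder.setSpout("spout", new MySpout());
        builder.setBolt("bolt", new MyBolt()).shuffleGrouping("spout");

        Config conf = new Config();
        conf.setNumWorkers(1);  // every topology requests exactly 1 worker slot

        // run 5 times with distinct topology names, e.g. "topo-1" ... "topo-5",
        // so only 5 of the theoretical 20 slots should ever be needed
        StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
    }
}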

On Tue, Jul 19, 2016 at 12:56 PM, Sai Dilip Reddy Kiralam <
dkira...@aadhya-analytics.com> wrote:

>
> What I understood from your side is that you want to use all of the
> supervisor slots. If I'm right, the problem is on the coding side: you
> specify how many workers a topology needs when you build it. Check your
> setNumWorkers statement:
>
> conf.setNumWorkers(2);
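> // the argument is the number of worker slots this topology will request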
>
> Best regards,
>
> K.Sai Dilip Reddy.
>
> On Tue, Jul 19, 2016 at 12:43 PM, Navin Ipe <
> navin....@searchlighthealth.com> wrote:
>
>> There is another person who has faced the same problem earlier:
>> https://groups.google.com/forum/#!topic/storm-user/wLOq1nImRWQ
>>
>> This is what I've found out so far:
>>
>>    - Theoretically, having 4 supervisors with the default 4 slots on each
>>    should give you 4*4=16 worker slots (but in reality that is not what
>>    happens; see the supervisor.slots.ports sketch below).
>>    - Every worker is a JVM, one per port listed in supervisor.slots.ports.
>>    - Number of workers (setNumWorkers) = the maximum number of slots a
>>    topology can occupy.
>>    - A single worker JVM executes code from only a single topology.
>>
>>
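>> As an illustration of the slot mechanics (assuming the stock defaults
>> that ship in Storm's defaults.yaml; I haven't confirmed these values on
>> our AWS nodes), each supervisor advertises one worker slot per port
>> listed under supervisor.slots.ports:
>>
>>    supervisor.slots.ports:
>>        - 6700
>>        - 6701
>>        - 6702
>>        - 6703
>>
>> Four ports = four slots = at most four worker JVMs on that supervisor.
>>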
>> Is there anyone who has experienced this problem or knows how to solve it?
>>
>>
>> On Tue, Jul 19, 2016 at 10:16 AM, Navin Ipe <
>> navin....@searchlighthealth.com> wrote:
>>
>>> Any help please?
>>>
>>> On Mon, Jul 18, 2016 at 3:49 PM, Navin Ipe <
>>> navin....@searchlighthealth.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> On my local system, I was able to run 5 topologies only after I
>>>> increased the number of slots in storm.yaml to 5.
>>>>
>>>> But when submitting my jar to Storm on an Amazon node, even though I
>>>> had 5 supervisor nodes running, only 4 of my topologies were assigned a
>>>> worker. The 5th topology got 0 workers.
>>>>
>>>> Why is this happening? Shouldn't each supervisor have 4 worker slots
>>>> available by default? So 5 supervisors would have 5*4=20 slots?
>>>>
>>>> I've read that I'm expected to configure the slots in the storm.yaml of
>>>> every supervisor and restart the supervisor. So if I configure each
>>>> supervisor to have 5 slots, will I be able to run 5*5=25 topologies?
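>>>>
>>>> If so, I assume each supervisor's storm.yaml would look something like
>>>> this (the extra port 6704 is my guess at a reasonable value; any free
>>>> port should do), followed by a supervisor restart:
>>>>
>>>>    supervisor.slots.ports:
>>>>        - 6700
>>>>        - 6701
>>>>        - 6702
>>>>        - 6703
>>>>        - 6704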
>>>>
>>>> How exactly does this slots concept work, and what was it meant for?
>>>>
>>>> --
>>>> Regards,
>>>> Navin
>>>>
>>>
>>>
>>>
>>> --
>>> Regards,
>>> Navin
>>>
>>
>>
>>
>> --
>> Regards,
>> Navin
>>
>
>


-- 
Regards,
Navin
