Yes, I have implemented it that way, and it works well in practice.
I implemented a complete HA solution for Nimbus, and our team wrote a
complete scheduler for Storm (similar to YARN, to support a 700+ node
cluster).

2014-09-10 10:02 GMT+08:00 Ankit Toshniwal <ankitoshni...@gmail.com>:

> Yes, that's a problem area, and we have been discussing internally how
> we can handle it better. We are considering moving to an HDFS-based
> solution where Nimbus uploads the jars into HDFS instead of onto its local
> disk (which is a single point of failure), and the supervisors download
> the jars from HDFS as well.
>
> The other problem we ran into was NIC saturation on the Nimbus host, since
> too many machines were copying the jars (180 MB in size) to worker
> machines, leading to an increase in total deployment time. By moving to an
> HDFS-based solution we can do this more effectively and faster, and it
> scales better.
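>
> A rough sketch of what that could look like, using the standard Hadoop
> FileSystem API (the HDFS paths and class names below are only illustrative,
> not from an actual Storm patch):
>
>     // Illustrative only: how Nimbus might push a topology jar to HDFS
>     // and how a supervisor might pull it back down.
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.fs.FileSystem;
>     import org.apache.hadoop.fs.Path;
>
>     public class HdfsJarDistribution {
>
>         // Nimbus side: upload the submitted topology jar once.
>         public static void uploadJar(String localJar, String topologyId) throws Exception {
>             FileSystem fs = FileSystem.get(new Configuration());
>             Path dest = new Path("/storm/topologies/" + topologyId + "/stormjar.jar");
>             fs.copyFromLocalFile(new Path(localJar), dest);
>         }
>
>         // Supervisor side: fetch the jar from HDFS instead of from Nimbus.
>         public static void downloadJar(String topologyId, String localDir) throws Exception {
>             FileSystem fs = FileSystem.get(new Configuration());
>             Path src = new Path("/storm/topologies/" + topologyId + "/stormjar.jar");
>             fs.copyToLocalFile(src, new Path(localDir, "stormjar.jar"));
>         }
>     }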
>
> We do not have a working prototype for it yet, but it is something we are
> actively pursuing.
>
> Ankit
>
> On Tue, Sep 9, 2014 at 6:43 PM, 潘臻轩 <zhenxuan...@gmail.com> wrote:
>
>> I don't agree, Nathan. If only the Nimbus process goes down, it is
>> fail-fast, but if the machine itself fails (for example, a disk error),
>> this may lead to the topology being cleared.
>>
>> 2014-09-10 9:39 GMT+08:00 潘臻轩 <zhenxuan...@gmail.com>:
>>
>>> *To my knowledge, that is not the case. You should check it with a
>>> script or some other mechanism.*
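>>>
>>> For example, a minimal external check might just probe Nimbus's Thrift
>>> port (6627 by default) and exit non-zero if it is unreachable, so a
>>> supervising process can restart Nimbus; the hostname here is a placeholder:
>>>
>>>     import java.net.Socket;
>>>
>>>     public class NimbusCheck {
>>>         public static void main(String[] args) {
>>>             // Try to open a TCP connection to the Nimbus Thrift port.
>>>             try (Socket s = new Socket("nimbus-host", 6627)) {
>>>                 System.out.println("Nimbus port is reachable");
>>>             } catch (Exception e) {
>>>                 System.err.println("Nimbus appears down: " + e.getMessage());
>>>                 System.exit(1); // let a watchdog script restart Nimbus
>>>             }
>>>         }
>>>     }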
>>>
>>> 2014-09-10 0:49 GMT+08:00 Jiang Jacky <jiang0...@gmail.com>:
>>>
>>>> Hi, I read the articles about Nimbus, and they say the Nimbus daemon
>>>> is fail-fast. But I am not sure whether it is like Hadoop, where there
>>>> is a secondary server for failover: if the Nimbus server goes down
>>>> completely, the secondary server can take over. Thanks
>>>>
>>>
>>>
>>
>
