Github user revans2 commented on the pull request:

    https://github.com/apache/storm/pull/354#issuecomment-69212412
  
    From reading through the design document, my initial impression is that 
we are coupling Nimbus failover and leader election too closely to having 
a persistent store for the data.  
    
    It feels to me like we want two different things.  One is a highly 
available store for blobs (we are working on an API for something similar to 
this for STORM-411) that we can write into and query to confirm that a 
blob has been persisted.  That could mean adequate replication, or whatever 
else the blob store feels it needs.
    
    The second thing we want is leader election/failover for Nimbus.
    
    By separating the two, we can easily support a setup where we are running on 
YARN with only one Nimbus instance, but the data is stored in HDFS.  
If Nimbus crashes, a new one comes up elsewhere and everything should work just 
fine.  Or we are running on EC2 and want Nimbus to be hot/warm, but three 
instances of Nimbus cost too much, so I run just two and store the 
data in S3 instead.  Exposing the replication count feels like an 
internal detail of the storage system that we really don't care much about.
    
    It really feels like keeping the storage API completely separate from the 
failover and leader election code would give us a lot more flexibility.

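    As a rough illustration of the separation being proposed, here is a minimal 
Java sketch with two independent interfaces, one for blob storage and one for 
leader election.  All names are hypothetical, not the actual STORM-411 API; 
the trivial in-memory implementations exist only to show that the two concerns 
compose without either one knowing about the other.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical blob-store abstraction: an HDFS, S3, or local-disk backend
// would each implement this.  Replication (or durability in general) is the
// store's internal concern, not something callers configure.
interface BlobStore {
    void put(String key, byte[] blob);     // persist; the store decides how
    Optional<byte[]> get(String key);      // fetch a previously stored blob
    boolean isPersisted(String key);       // "is this blob durably stored?"
}

// Hypothetical leader-election abstraction, fully independent of storage.
// A real implementation might use a ZooKeeper ephemeral node.
interface LeaderElector {
    boolean tryBecomeLeader(String nimbusId);
    boolean isLeader(String nimbusId);
}

// In-memory stand-ins, just to make the sketch self-contained.
class InMemoryBlobStore implements BlobStore {
    private final Map<String, byte[]> blobs = new ConcurrentHashMap<>();
    public void put(String key, byte[] blob) { blobs.put(key, blob); }
    public Optional<byte[]> get(String key) { return Optional.ofNullable(blobs.get(key)); }
    public boolean isPersisted(String key) { return blobs.containsKey(key); }
}

class SingleLeaderElector implements LeaderElector {
    private volatile String leader = null;
    public synchronized boolean tryBecomeLeader(String nimbusId) {
        if (leader == null) leader = nimbusId;   // first claimant wins
        return nimbusId.equals(leader);
    }
    public boolean isLeader(String nimbusId) { return nimbusId.equals(leader); }
}
```

    With this split, the YARN/HDFS and EC2/S3 scenarios above differ only in 
which `BlobStore` implementation is plugged in; the election code is untouched.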
