[ 
https://issues.apache.org/jira/browse/STORM-634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14295776#comment-14295776
 ] 

Parth Brahmbhatt commented on STORM-634:
----------------------------------------

I have made some progress on this. I converted SupervisorInfo, Assignment and 
StormBase to thrift structures and I am now working on converting the 
ZkWorkerHeartBeat to thrift. While thriftifying these structures I have 
realized that the defrecords for them are pretty meaningless. The code does 
not really adhere to the fields defined in the defrecord and does not take 
advantage of the defrecord structure in any way (no defaults, no validations, 
no protocol or interface implementations, no use in multimethods). It treats 
these structures as open maps, which is very confusing. For example, StormBase 
does not define "delay_secs" or "previous_state" as fields, but when we do a 
rebalance or kill transition we still store this information as part of the 
StormBase instance.
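
To illustrate the mismatch, a rough sketch (the field names here are 
approximations for illustration, not the exact definitions in common.clj):

(defrecord StormBase [storm-name launch-time-secs status num-workers])

;; Records behave like open maps, so nothing stops callers from attaching
;; keys that were never declared as fields:
(def base (map->StormBase {:storm-name "wordcount"
                           :launch-time-secs 1422000000
                           :status :active
                           :num-workers 4}))

;; e.g. during a rebalance or kill transition:
(assoc base :delay-secs 30 :previous-state :active)
;; => still "a StormBase", but now carrying keys the record never declared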

The simplest way to address this JIRA would be to just get rid of these 
defrecords and instead create a mk-X function for each of them that returns a 
map representing the structure. Serialization currently breaks because 
defrecords are treated as Java classes and serialized accordingly; if we make 
all these structures open maps, that is no longer an issue. If the code starts 
assuming that certain keys will always exist in a map, that is equivalent to 
adding a new required field to a thrift structure, which is not backward 
compatible anyway. You might think we would lose some readability if these 
were plain maps instead of predefined types, but I think we would actually 
gain readability, since we are not respecting the defined types today anyway, 
which is itself confusing. The only advantage I see in moving to thrift at 
this point is for future work like moving the heartbeats from zookeeper to 
nimbus.
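
As a rough sketch of the mk-X idea (names and keys are illustrative only, not 
a final proposal):

;; A plain map constructor instead of a defrecord.
(defn mk-storm-base
  [storm-name launch-time-secs status num-workers]
  {:storm-name storm-name
   :launch-time-secs launch-time-secs
   :status status
   :num-workers num-workers})

;; Transition-specific data is just more keys on the same map, e.g.
;; (assoc (mk-storm-base ...) :delay-secs 30 :previous-state :active)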

[~revans2] [~ptgoetz] I would like to hear your opinions. 

> Storm should support rolling upgrade/downgrade of storm cluster.
> ----------------------------------------------------------------
>
>                 Key: STORM-634
>                 URL: https://issues.apache.org/jira/browse/STORM-634
>             Project: Apache Storm
>          Issue Type: Improvement
>            Reporter: Parth Brahmbhatt
>            Assignee: Parth Brahmbhatt
>
> Currently, when a new version of Storm is released, users need to back up 
> their existing topologies, kill all the topologies, perform the upgrade and 
> resubmit all the topologies in order to upgrade an existing Storm cluster. 
> This is painful and results in downtime, which may not be acceptable for 
> "always alive" production systems.
> Storm should support a rolling upgrade/downgrade deployment process to avoid 
> these downtimes and to make the transition to a different version 
> effortless. Based on my initial attempt, the primary issue seems to be the 
> Java serialization used to serialize classes like StormBase, Assignment and 
> WorkerHeartbeat, which are then stored in zookeeper. If the serial versions 
> do not match, deserialization fails and the processes just get killed 
> indefinitely. We need to change Utils/serialize and Utils/deserialize so 
> they can support non-Java serialization mechanisms such as JSON.
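
For the serialization point above, a minimal sketch of what a JSON-backed 
serialize/deserialize pair could look like on the Clojure side, assuming 
clojure.data.json purely for illustration (the actual library and mechanism 
are still open questions):

(require '[clojure.data.json :as json])

;; Write a structure (an open map) as UTF-8 JSON bytes instead of relying on
;; Java serialization of a record class.
(defn serialize [data]
  (.getBytes (json/write-str data) "UTF-8"))

;; Read UTF-8 JSON bytes back into a map with keyword keys. No
;; serialVersionUID is involved, so old and new processes can read each
;; other's data as long as they tolerate missing or extra keys.
(defn deserialize [^bytes buf]
  (json/read-str (String. buf "UTF-8") :key-fn keyword))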



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
