[ https://issues.apache.org/jira/browse/HDDS-3912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17155061#comment-17155061 ]

Sammi Chen commented on HDDS-3912:
----------------------------------

Hi [~pifta] and [~avijayan], do we need to put this into 0.6.0 to fix the 
compatibility issue ASAP? 

> Change SCM ContainerDB key to proto structure to support backward 
> compatibility.
> --------------------------------------------------------------------------------
>
>                 Key: HDDS-3912
>                 URL: https://issues.apache.org/jira/browse/HDDS-3912
>             Project: Hadoop Distributed Data Store
>          Issue Type: Task
>            Reporter: Aravindan Vijayan
>            Assignee: Istvan Fajth
>            Priority: Major
>
> Currently, the key type of the SCM container DB is 
> org.apache.hadoop.hdds.scm.container.ContainerID, which is not backed by a 
> proto equivalent. Hence, we use a long codec to serialize the key from long 
> to byte[] and deserialize it back. 
> {code}
>   public static final DBColumnFamilyDefinition<ContainerID, ContainerInfo>
>       CONTAINERS =
>       new DBColumnFamilyDefinition<ContainerID, ContainerInfo>(
>           "containers",
>           ContainerID.class,
>           new ContainerIDCodec(),
>           ContainerInfo.class,
>           new ContainerInfoCodec());
> {code}
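> For reference, a minimal sketch of what the current ContainerIDCodec boils 
> down to, assuming ContainerID exposes its underlying long via getId() and 
> the Codec interface uses toPersistedFormat/fromPersistedFormat; the details 
> are illustrative rather than a verbatim copy of the codebase:
> {code}
> import java.io.IOException;
> import java.nio.ByteBuffer;
>
> public class ContainerIDCodec implements Codec<ContainerID> {
>   @Override
>   public byte[] toPersistedFormat(ContainerID container) throws IOException {
>     // The key is persisted as a fixed 8-byte big-endian long.
>     return ByteBuffer.allocate(Long.BYTES).putLong(container.getId()).array();
>   }
>
>   @Override
>   public ContainerID fromPersistedFormat(byte[] rawData) throws IOException {
>     // Any change to the ContainerID type invalidates this fixed layout,
>     // which is the compatibility risk this issue is about.
>     return new ContainerID(ByteBuffer.wrap(rawData).getLong());
>   }
> }
> {code}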
> In the future, if we have to support a container ID type that is more than 
> just a long, changing the ContainerID class will break backward 
> compatibility: we would then have to either migrate the old data or provide 
> fallback conversion codecs for the old key format. Hence, it is better to 
> wrap the long in a proto structure now, as sketched below. 
> cc [~nanda619]  / [~arp].
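> A hedged sketch of the proposed direction, assuming a proto message along 
> the lines of "message ContainerID { required uint64 id = 1; }" compiled 
> into a generated class called ContainerIDProto here; the generated-class 
> name and field layout are assumptions for illustration, not the committed 
> design:
> {code}
> import java.io.IOException;
>
> public class ContainerIDCodec implements Codec<ContainerID> {
>   @Override
>   public byte[] toPersistedFormat(ContainerID container) throws IOException {
>     // Wrapping the long in a proto message keeps the on-disk key
>     // evolvable: new fields can be added later without rewriting or
>     // migrating the existing keys.
>     return ContainerIDProto.newBuilder()
>         .setId(container.getId())
>         .build()
>         .toByteArray();
>   }
>
>   @Override
>   public ContainerID fromPersistedFormat(byte[] rawData) throws IOException {
>     // parseFrom throws InvalidProtocolBufferException (an IOException)
>     // on malformed data instead of silently misreading the bytes.
>     return new ContainerID(ContainerIDProto.parseFrom(rawData).getId());
>   }
> }
> {code}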



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
