[ https://issues.apache.org/jira/browse/HDDS-748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16667582#comment-16667582 ]

Anu Engineer commented on HDDS-748:
-----------------------------------

+1 on the proposed change. I think BadLands would be a good time to get this 
fixed. It will make the code much more readable and reduce errors in the code. 
Thanks for the proposal.

> Use strongly typed metadata Table implementation (proposal)
> -----------------------------------------------------------
>
>                 Key: HDDS-748
>                 URL: https://issues.apache.org/jira/browse/HDDS-748
>             Project: Hadoop Distributed Data Store
>          Issue Type: Wish
>            Reporter: Elek, Marton
>            Assignee: Elek, Marton
>            Priority: Major
>
> NOTE: This issue is a proposal. I assigned it to myself to make it clear that 
> it's not ready to implement; I just want to start a discussion about the 
> proposed change.
> org.apache.hadoop.utils.db.DBStore (from HDDS-356) is a new-generation 
> MetadataStore to store all persistent state of the hdds/ozone scm/om/datanodes.
> It supports column families via the Table interface, which provides methods 
> like:
> {code:java}
> byte[] get(byte[] key) throws IOException;
> void put(byte[] key, byte[] value) throws IOException;
> {code}
> In our current code we usually use static helpers to do the _byte[] -> 
> object_ and _object -> byte[]_ conversion with protobuf.
> For example, in KeyManagerImpl, OmKeyInfo.getFromProtobuf is used multiple 
> times to deserialize the OmKeyInfo object.
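> For illustration, a hedged sketch of the current call pattern (names are 
> simplified; the exact helpers vary from caller to caller):
> {code:java}
> // Today every caller converts by hand around the raw byte[] table.
> byte[] keyBytes = DFSUtil.string2Bytes(objectKey);
> byte[] valueBytes = metadataManager.getKeyTable().get(keyBytes);
> OmKeyInfo keyInfo = OmKeyInfo.getFromProtobuf(
>     OzoneManagerProtocolProtos.KeyInfo.parseFrom(valueBytes));
> {code}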
>  
> *I propose to create a type-safe table* by using:
> {code:java}
> public interface Table<KEY_TYPE, VALUE_TYPE> extends AutoCloseable
> {code}
> The put and get could be modified to:
> {code:java}
> VALUE_TYPE get(KEY_TYPE key) throws IOException;
> void put(KEY_TYPE key, VALUE_TYPE value) throws IOException;
> {code}
> For example for the key table it could be:
> {code:java}
> OmKeyInfo get(String key) throws IOException;
> void put(String key, OmKeyInfo value) throws IOException;
> {code}
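> With that, the hand-written conversion from the KeyManagerImpl sketch above 
> collapses to (hedged sketch, assuming the codecs below are registered):
> {code:java}
> // (De)serialization now happens inside the table.
> OmKeyInfo keyInfo = keyTable.get(objectKey);
> keyTable.put(objectKey, keyInfo);
> {code}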
>  
> It requires registering internal codec (marshaller/unmarshaller) 
> implementations during the creation of the (..)Table.
> The registration of the codecs would be optional. Without it, the Table 
> would work as it does now (using byte[], byte[]).
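> One possible shape for such a codec (a hedged sketch; the Codec name and 
> method names are illustrative, not an existing API):
> {code:java}
> public interface Codec<T> {
>   /** Serialize the object to the raw on-disk byte[] format. */
>   byte[] toPersistedFormat(T object) throws IOException;
>   /** Deserialize the raw on-disk byte[] format back to an object. */
>   T fromPersistedFormat(byte[] rawData) throws IOException;
> }
> {code}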
> *Advantages*:
>  * Simpler code (no need to repeat the serialization everywhere), which is 
> less error-prone.
>  * Clear separation of the layers and better measurability (as of now I 
> can't see the serialization overhead with OpenTracing). Easy to test 
> different serializations in the future.
>  * Easier to create additional developer tools to investigate the current 
> state of the rocksdb metadata stores. We had SQLCLI to export all the data 
> to SQL, but by registering the format in the rocksdb table we can easily 
> create a Calcite-based SQL console.
> *Additional info*:
> I would modify the interfaces of DBStoreBuilder and DBStore:
> {code:java}
>    this.store = DBStoreBuilder.newBuilder(conf)
>         .setName(OM_DB_NAME)
>         .setPath(Paths.get(metaDir.getPath()))
>         .addTable(KEY_TABLE, DBUtil.STRING_KEY_CODEC, new OmKeyInfoCoder())
> //...
>         .build();
> {code}
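> As an example, the OmKeyInfoCoder above could look roughly like this 
> (a hedged sketch, assuming the illustrative Codec interface sketched 
> earlier; not an existing class):
> {code:java}
> public class OmKeyInfoCoder implements Codec<OmKeyInfo> {
>   @Override
>   public byte[] toPersistedFormat(OmKeyInfo keyInfo) {
>     // OmKeyInfo already knows its protobuf representation.
>     return keyInfo.getProtobuf().toByteArray();
>   }
> 
>   @Override
>   public OmKeyInfo fromPersistedFormat(byte[] rawData) throws IOException {
>     return OmKeyInfo.getFromProtobuf(
>         OzoneManagerProtocolProtos.KeyInfo.parseFrom(rawData));
>   }
> }
> {code}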
> And using it from the DBStore:
> {code:java}
> //default, without codec
> Table<byte[], byte[]> getTable(String name) throws IOException;
> //advanced, with codecs from the codec registry
> <K, V> Table<K, V> getTable(String name, Class<K> keyType, Class<V> valueType)
>     throws IOException;
> //for example
> store.getTable(KEY_TABLE, String.class, OmKeyInfo.class);
> //or
> store.getTable(KEY_TABLE, String.class, UserInfo.class);
> //exception is thrown: No codec is registered for KEY_TABLE with type UserInfo.
> {code}
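> Internally, the typed getTable could resolve the codecs roughly like this 
> (a hedged sketch; CodecRegistry and TypedTable are illustrative names, not 
> existing classes):
> {code:java}
> // Assumes a codecRegistry field populated by DBStoreBuilder.addTable(...).
> public <K, V> Table<K, V> getTable(String name, Class<K> keyType,
>     Class<V> valueType) throws IOException {
>   Codec<K> keyCodec = codecRegistry.getCodec(name, keyType);
>   Codec<V> valueCodec = codecRegistry.getCodec(name, valueType);
>   if (keyCodec == null || valueCodec == null) {
>     throw new IOException(
>         "No codec is registered for " + name + " with the requested types");
>   }
>   // Wrap the raw byte[] table with the codecs.
>   return new TypedTable<>(getTable(name), keyCodec, valueCodec);
> }
> {code}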
> *Priority*:
> I think it's a very useful and valuable step forward, but the real priority 
> is low. It's ideal for new contributors, especially as it's an independent, 
> standalone part of the Ozone code.


