[ 
https://issues.apache.org/jira/browse/HDDS-748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694676#comment-16694676
 ] 

Hadoop QA commented on HDDS-748:
--------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 22s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  3m 48s{color} | {color:orange} root: The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m  7s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 43s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 35s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 44s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 49s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-748 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12949020/HDDS-748.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3ff6e354dcc0 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f63e4e4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDDS-Build/1784/artifact/out/diff-checkstyle-root.txt |
| unit | https://builds.apache.org/job/PreCommit-HDDS-Build/1784/artifact/out/patch-unit-hadoop-hdds_common.txt |
| unit | https://builds.apache.org/job/PreCommit-HDDS-Build/1784/artifact/out/patch-unit-hadoop-ozone_common.txt |
| unit | https://builds.apache.org/job/PreCommit-HDDS-Build/1784/artifact/out/patch-unit-hadoop-ozone_ozone-manager.txt |
| unit | https://builds.apache.org/job/PreCommit-HDDS-Build/1784/artifact/out/patch-unit-hadoop-ozone_integration-test.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/1784/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 10000) |
| modules | C: hadoop-hdds/common hadoop-ozone/common hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: . |
| Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/1784/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use strongly typed metadata Table implementation
> ------------------------------------------------
>
>                 Key: HDDS-748
>                 URL: https://issues.apache.org/jira/browse/HDDS-748
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>            Reporter: Elek, Marton
>            Assignee: Elek, Marton
>            Priority: Major
>         Attachments: HDDS-748.001.patch, HDDS-748.002.patch
>
>
> NOTE: This issue is a proposal. I assigned it to myself to make it clear that 
> it's not ready to implement; I just want to start a discussion about the 
> proposed change.
> org.apache.hadoop.utils.db.DBStore (from HDDS-356) is a new-generation 
> MetadataStore that stores all the persistent state of the HDDS/Ozone 
> SCM/OM/datanodes.
> It supports column families via the Table interface, which exposes methods 
> like:
> {code:java}
> byte[] get(byte[] key) throws IOException;
> void put(byte[] key, byte[] value);
> {code}
> In our current code we usually use static helpers to do the _byte[] -> 
> object_ and _object -> byte[]_ conversion with protobuf.
> For example, in KeyManagerImpl, OmKeyInfo.getFromProtobuf is used multiple 
> times to deserialize OmKeyInfo objects.
>  
> *I propose to create a type-safe table* by using:
> {code:java}
> public interface Table<KEY_TYPE, VALUE_TYPE> extends AutoCloseable
> {code}
> The put and get could be modified to:
> {code:java}
> VALUE_TYPE get(KEY_TYPE key) throws IOException;
> void put(KEY_TYPE key, VALUE_TYPE value);
> {code}
> For example for the key table it could be:
> {code:java}
> OmKeyInfo get(String key) throws IOException;
> void put(String key, OmKeyInfo value);
> {code}
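To make the intent concrete, here is a minimal, self-contained sketch of how such a typed table could wrap the existing byte[]-based table. The Codec and TypedTable names are illustrative only (not existing HDDS API), and a HashMap stands in for the RocksDB column family:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Illustrative codec: converts between a domain type and the raw byte[]
// representation stored in the metadata store. (Hypothetical name/shape.)
interface Codec<T> {
    byte[] toPersistedFormat(T object) throws IOException;
    T fromPersistedFormat(byte[] rawData) throws IOException;
}

// A typed view over a raw byte[] -> byte[] table: every get/put goes
// through the registered key and value codecs exactly once, so callers
// never touch byte[] directly.
class TypedTable<K, V> {
    // Stand-in for the underlying RocksDB column family.
    private final Map<String, byte[]> rawTable = new HashMap<>();
    private final Codec<K> keyCodec;
    private final Codec<V> valueCodec;

    TypedTable(Codec<K> keyCodec, Codec<V> valueCodec) {
        this.keyCodec = keyCodec;
        this.valueCodec = valueCodec;
    }

    void put(K key, V value) throws IOException {
        rawTable.put(rawKey(key), valueCodec.toPersistedFormat(value));
    }

    V get(K key) throws IOException {
        byte[] raw = rawTable.get(rawKey(key));
        return raw == null ? null : valueCodec.fromPersistedFormat(raw);
    }

    // Encode the key once; a String wrapper gives us value-equality for the
    // HashMap stand-in (RocksDB compares the raw bytes itself).
    private String rawKey(K key) throws IOException {
        return new String(keyCodec.toPersistedFormat(key), StandardCharsets.UTF_8);
    }
}
```

With this shape, the KeyManagerImpl-style call sites shrink to a plain `table.get(keyName)` returning the domain object directly.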
>  
> This requires registering internal codec (marshaller/unmarshaller) 
> implementations when the (..)Table instances are created.
> Codec registration would be optional: without it, the Table could keep 
> working as it does now (using byte[], byte[]).
> *Advantages*:
>  * Simpler, less error-prone code (no need to repeat the serialization 
> logic everywhere).
>  * Clear separation of the layers (as of now the serialization overhead is 
> neither visible nor measurable with OpenTracing). Easier to test different 
> serializations in the future.
>  * Easier to create additional developer tools to investigate the current 
> state of the RocksDB metadata stores. We had SQLCLI to export all the data 
> to SQL, but with the format registered in the RocksDB table we can easily 
> create a Calcite-based SQL console.
> *Additional info*:
> I would modify the interface of the DBStoreBuilder and DBStore:
> {code:java}
>    this.store = DBStoreBuilder.newBuilder(conf)
>         .setName(OM_DB_NAME)
>         .setPath(Paths.get(metaDir.getPath()))
>         .addTable(KEY_TABLE, DBUtil.STRING_KEY_CODEC, new OmKeyInfoCoder())
> //...
>         .build();
> {code}
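For completeness, a codec such as the DBUtil.STRING_KEY_CODEC referenced in the builder sketch above might be as simple as the following; the class and method names are assumptions mirroring the proposal's marshaller/unmarshaller idea, not committed code:

```java
import java.nio.charset.StandardCharsets;

// Sketch of a trivial String key codec: the whole job is a
// byte[] <-> String round-trip with a fixed charset.
final class StringCodec {
    byte[] toPersistedFormat(String object) {
        return object == null ? null : object.getBytes(StandardCharsets.UTF_8);
    }

    String fromPersistedFormat(byte[] rawData) {
        return rawData == null ? null : new String(rawData, StandardCharsets.UTF_8);
    }
}
```

A value codec like the hypothetical OmKeyInfoCoder would look the same, with the protobuf `toByteArray()`/`parseFrom()` calls in place of the charset conversion.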
> And using it from the DBStore:
> {code:java}
> // default, without codec
> Table<byte[], byte[]> getTable(String name) throws IOException;
> 
> // advanced, with codecs taken from the codec registry
> <K, V> Table<K, V> getTable(String name, Class<K> keyType, Class<V> valueType);
> 
> // for example
> store.getTable(KEY_TABLE, String.class, OmKeyInfo.class);
> // or
> store.getTable(KEY_TABLE, String.class, UserInfo.class);
> // exception is thrown: No codec is registered for KEY_TABLE with type UserInfo.
> {code}
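The fail-fast behavior sketched in the last comment above could be backed by a small registry keyed on the Java type. A minimal sketch (all names hypothetical; the codec value is left untyped here to keep the lookup logic in focus):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a codec registry: getTable(name, keyType, valueType) would
// consult it for both type parameters and fail fast when a table is
// requested with a type that has no registered codec.
class CodecRegistry {
    private final Map<Class<?>, Object> codecs = new HashMap<>();

    <T> void addCodec(Class<T> type, Object codec) {
        codecs.put(type, codec);
    }

    Object getCodec(Class<?> type) {
        Object codec = codecs.get(type);
        if (codec == null) {
            // Surfacing a missing codec at getTable() time, not at the
            // first get/put, keeps misconfiguration close to its cause.
            throw new IllegalStateException(
                "No codec is registered for type " + type.getName());
        }
        return codec;
    }
}
```

This keeps the byte[]-only path untouched: the registry is only consulted on the typed `getTable` overload.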
> *Priority*:
> I think it's a very useful and valuable step forward, but the real priority 
> is lower. It's ideal for new contributors, especially as it's an 
> independent, standalone part of the Ozone code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
