[jira] [Commented] (HDFS-15230) Sanity check should not assume key base name can be derived from version name
[ https://issues.apache.org/jira/browse/HDFS-15230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17411426#comment-17411426 ]

Jason Wen commented on HDFS-15230:
----------------------------------

+1 for fixing this issue. We also hit the same issue with our custom KeyProvider. We can keep this sanity check, but there should be a configuration option to skip it.

> Sanity check should not assume key base name can be derived from version name
> -----------------------------------------------------------------------------
>
>                 Key: HDFS-15230
>                 URL: https://issues.apache.org/jira/browse/HDFS-15230
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Wei-Chiu Chuang
>            Priority: Major
>
> HDFS-14884 checks if the encryption info of a file matches the encryption
> zone key.
> {code}
> if (!KeyProviderCryptoExtension.
>     getBaseName(keyVersionName).equals(zoneKeyName)) {
>   throw new IllegalArgumentException(String.format(
>       "KeyVersion '%s' does not belong to the key '%s'",
>       keyVersionName, zoneKeyName));
> }
> {code}
> Here it assumes the "base name" can be derived from the key version name, and
> that the base name should be the same as the zone key.
> However, there is no published definition of what a key version name should be.
> While the code works for the built-in JKS key provider, it may not work for
> other kinds of key providers. (Specifically, it breaks Cloudera's KeyTrustee
> KMS KeyProvider.)

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
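For context on why the check is fragile: the built-in JKS provider forms version names as "<keyName>@<version>", and the base name is recovered by stripping everything from the last '@'. The sketch below is a simplified, self-contained re-implementation of that derivation (not the actual Hadoop source; the '@' convention is an assumption based on the built-in provider) showing how it misbehaves for providers with a different version-name scheme:

```java
// Simplified sketch of JKS-style base-name derivation; not the actual
// Hadoop KeyProviderCryptoExtension code.
public class KeyVersionNames {

  /** Returns the portion of the version name before the last '@'. */
  static String getBaseName(String versionName) {
    int div = versionName.lastIndexOf('@');
    if (div == -1) {
      // A provider issuing opaque version names has no '@' at all.
      throw new IllegalArgumentException(
          "No version in key path " + versionName);
    }
    return versionName.substring(0, div);
  }

  public static void main(String[] args) {
    // Works for the built-in convention:
    System.out.println(getBaseName("myzonekey@3"));   // myzonekey
    // But a hypothetical provider-specific name like "tenant@key-7f3a"
    // yields the bogus base name "tenant", and an opaque name such as
    // "a1b2c3d4" throws outright - hence the request for a config switch.
    System.out.println(getBaseName("tenant@key-7f3a"));
  }
}
```

This is why the comment argues for keeping the check but gating it behind a configuration option rather than assuming every provider follows the '@' convention.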
[jira] [Commented] (HDFS-15916) Backward compatibility - Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff
[ https://issues.apache.org/jira/browse/HDFS-15916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17308776#comment-17308776 ]

Jason Wen commented on HDFS-15916:
----------------------------------

Yes, it certainly makes sense if we can make it work with the mentioned fix. In general I would like to see compatibility between a 3.x client and a 2.x server, but I am not sure whether there are any more roadblocks to achieving that. If for this particular case we can make it work with a few changes, it is definitely worth fixing.

> Backward compatibility - Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-15916
>                 URL: https://issues.apache.org/jira/browse/HDFS-15916
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: distcp
>    Affects Versions: 3.2.2
>            Reporter: Srinivasu Majeti
>            Priority: Major
>
> It looks like when using the distcp diff option between two snapshots from a
> Hadoop 3 cluster to a Hadoop 2 cluster, we get the exception below; this
> appears to break backward compatibility due to the introduction of the new
> getSnapshotDiffReportListing API.
>
> {code:java}
> hadoop distcp -diff s1 s2 -update src_cluster_path dst_cluster_path
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchMethodException):
> Unknown method getSnapshotDiffReportListing called on
> org.apache.hadoop.hdfs.protocol.ClientProtocol protocol
> {code}
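One shape such a fix could take is a client-side fallback: try the Hadoop 3 paged RPC first, and on an "unknown method" failure from an older server, retry with the legacy call. The sketch below simulates that pattern with plain Java; the interface, class names, and the use of UnsupportedOperationException in place of RpcNoSuchMethodException are all illustrative assumptions, not the actual DistCp/DFSClient code:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of a new-RPC-with-legacy-fallback client pattern.
public class SnapshotDiffFallback {

  interface NameNodeRpc {
    List<String> getSnapshotDiffReportListing(String s1, String s2);
    List<String> getSnapshotDiffReport(String s1, String s2);
  }

  /** Simulates a Hadoop 2 NameNode that lacks the newer RPC method. */
  static class Hadoop2Server implements NameNodeRpc {
    public List<String> getSnapshotDiffReportListing(String s1, String s2) {
      // Stand-in for RpcNoSuchMethodException from the real RPC layer.
      throw new UnsupportedOperationException(
          "Unknown method getSnapshotDiffReportListing");
    }
    public List<String> getSnapshotDiffReport(String s1, String s2) {
      return Arrays.asList("M ./file1", "+ ./file2");
    }
  }

  /** Prefer the paged RPC; fall back to the pre-3.x call on old servers. */
  static List<String> diffWithFallback(NameNodeRpc server, String s1, String s2) {
    try {
      return server.getSnapshotDiffReportListing(s1, s2);
    } catch (UnsupportedOperationException e) {
      return server.getSnapshotDiffReport(s1, s2);
    }
  }

  public static void main(String[] args) {
    System.out.println(diffWithFallback(new Hadoop2Server(), "s1", "s2"));
  }
}
```

In the real client the detection would hinge on the specific RemoteException class reported by the server, so the catch clause above is only a stand-in for that logic.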
[jira] [Commented] (HDFS-15916) Backward compatibility - Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff
[ https://issues.apache.org/jira/browse/HDFS-15916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17308221#comment-17308221 ]

Jason Wen commented on HDFS-15916:
----------------------------------

Is this supposed to be supported between a Hadoop 3.x client and a 2.x server? In this case, the distcp source is a Hadoop 3 HDFS client and the destination is a Hadoop 2 HDFS server.

> Backward compatibility - Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-15916
>                 URL: https://issues.apache.org/jira/browse/HDFS-15916
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: distcp
>    Affects Versions: 3.2.2
>            Reporter: Srinivasu Majeti
>            Priority: Major
>
> It looks like when using the distcp diff option between two snapshots from a
> Hadoop 3 cluster to a Hadoop 2 cluster, we get the exception below; this
> appears to break backward compatibility due to the introduction of the new
> getSnapshotDiffReportListing API.
>
> {code:java}
> hadoop distcp -diff s1 s2 -update src_cluster_path dst_cluster_path
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchMethodException):
> Unknown method getSnapshotDiffReportListing called on
> org.apache.hadoop.hdfs.protocol.ClientProtocol protocol
> {code}
[jira] [Commented] (HDFS-6869) Logfile mode in secure datanode is expected to be 644
[ https://issues.apache.org/jira/browse/HDFS-6869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17278496#comment-17278496 ]

Jason Wen commented on HDFS-6869:
---------------------------------

I have the same question. When using CDH 6.3.2 the secure DN log permission is 600, but when using CDH 5.16.2 it is 644.

> Logfile mode in secure datanode is expected to be 644
> -----------------------------------------------------
>
>                 Key: HDFS-6869
>                 URL: https://issues.apache.org/jira/browse/HDFS-6869
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Guo Ruijing
>            Priority: Major
>
> Logfile mode in secure datanode is expected to be 644.
> [root@centos64-2 hadoop-hdfs]# ll
> total 136
> -rw-r--r-- 1 hdfs hadoop 19455 Aug 18 22:40 hadoop-hdfs-journalnode-centos64-2.log
> -rw-r--r-- 1 hdfs hadoop   718 Aug 18 22:38 hadoop-hdfs-journalnode-centos64-2.out
> -rw-r--r-- 1 hdfs hadoop 62316 Aug 18 22:40 hadoop-hdfs-namenode-centos64-2.log
> -rw-r--r-- 1 hdfs hadoop   718 Aug 18 22:38 hadoop-hdfs-namenode-centos64-2.out
> -rw-r--r-- 1 hdfs hadoop 15485 Aug 18 22:39 hadoop-hdfs-zkfc-centos64-2.log
> -rw-r--r-- 1 hdfs hadoop   718 Aug 18 22:39 hadoop-hdfs-zkfc-centos64-2.out
> drwxr-xr-x 2 hdfs root    4096 Aug 18 22:38 hdfs
> -rw-r--r-- 1 hdfs hadoop     0 Aug 18 22:38 hdfs-audit.log
> -rw-r--r-- 1 hdfs hadoop 15029 Aug 18 22:40 SecurityAuth-hdfs.audit
> [root@centos64-2 hadoop-hdfs]#
> [root@centos64-2 hadoop-hdfs]# ll hdfs/
> total 52
> -rw------- 1 hdfs hadoop 43526 Aug 18 22:41 hadoop-hdfs-datanode-centos64-2.log  <<< expected -rw-r--r-- (same as namenode log)
> -rw-r--r-- 1 root root     734 Aug 18 22:38 hadoop-hdfs-datanode-centos64-2.out
> -rw------- 1 root root     343 Aug 18 22:38 jsvc.err
> -rw------- 1 root root       0 Aug 18 22:38 jsvc.out
> -rw------- 1 hdfs hadoop     0 Aug 18 22:38 SecurityAuth-hdfs.audit
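The 644-versus-600 difference is consistent with the umask of the process that created the log: a file requested with mode 0666 is masked to 0666 & ~umask, so umask 022 yields 644 while umask 077 yields 600. The sketch below just shows that arithmetic; attributing the change to the umask of the secure DataNode's jsvc launcher is an assumption about the environment, not something confirmed in this thread:

```java
// Sketch of POSIX file-creation mode arithmetic: mode = requested & ~umask.
public class LogFileModes {

  /** Effective mode for a file requested with 0666, given a process umask. */
  static int effectiveMode(int umask) {
    return 0666 & ~umask;
  }

  public static void main(String[] args) {
    System.out.printf("umask 022 -> %o%n", effectiveMode(0022)); // 644
    System.out.printf("umask 077 -> %o%n", effectiveMode(0077)); // 600
  }
}
```

So if the launcher environment for the datanode changed its umask from 022 to 077 between the two CDH releases, that alone would explain the observed permissions.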
[jira] [Commented] (HDFS-15786) Minor improvement use isEmpty
[ https://issues.apache.org/jira/browse/HDFS-15786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17273201#comment-17273201 ]

Jason Wen commented on HDFS-15786:
----------------------------------

I see the PR is all about changing calls on String objects. For a String, string.isEmpty() is equivalent to string.length() == 0; both are O(1), so the change makes no performance difference there.

> Minor improvement use isEmpty
> -----------------------------
>
>                 Key: HDFS-15786
>                 URL: https://issues.apache.org/jira/browse/HDFS-15786
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Arturo Bernal
>            Assignee: Arturo Bernal
>            Priority: Minor
>              Labels: pull-request-available
>          Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Use isEmpty() instead of size().
>
> {{size()}} can be *O(1)* or *O(N)*, depending on the data structure;
> {{.isEmpty()}} is never *O(N)*.
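To make the comment's point concrete: for String the two checks are equivalent constant-time operations, while for some collections the emptiness check genuinely differs in cost. A standard-library example where it matters is java.util.concurrent.ConcurrentLinkedQueue, whose size() traverses the whole queue (O(N)) while isEmpty() only inspects the head (O(1)):

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class IsEmptyDemo {
  public static void main(String[] args) {
    // For String, both forms are O(1) and always agree.
    String s = "";
    System.out.println(s.isEmpty() == (s.length() == 0)); // true

    // For ConcurrentLinkedQueue, size() is O(N) (it walks the nodes),
    // so preferring isEmpty() over size() == 0 is a real improvement.
    ConcurrentLinkedQueue<Integer> q = new ConcurrentLinkedQueue<>();
    System.out.println(q.isEmpty());   // true
    q.add(1);
    System.out.println(q.isEmpty());   // false
  }
}
```

This matches the issue description: isEmpty() is never O(N), whereas size() can be, so the refactor is meaningful for collections even if it is a no-op for Strings.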