[jira] [Commented] (HDFS-12400) Provide a way for NN to drain the local key cache before re-encryption
[ https://issues.apache.org/jira/browse/HDFS-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158093#comment-16158093 ]

Hudson commented on HDFS-12400:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12820 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12820/])
HDFS-12400. Provide a way for NN to drain the local key cache before (xiao: rev b3a4d7d2a01051e166c06ee78e8c6e8df1341948)

* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirEncryptionZoneOp.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReencryptionWithKMS.java
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReencryption.java

> Provide a way for NN to drain the local key cache before re-encryption
> ----------------------------------------------------------------------
>
> Key: HDFS-12400
> URL: https://issues.apache.org/jira/browse/HDFS-12400
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: encryption
> Affects Versions: 3.0.0-beta1
> Reporter: Xiao Chen
> Assignee: Xiao Chen
> Attachments: HDFS-12400.01.patch, HDFS-12400.02.patch
>
> In HDFS-12359, a fix for the KMS ACLs required for re-encryption was done. As
> part of the fix, the following code is used to make sure the local provider
> cache in the NN is drained.
> {code:java}
> if (dir.getProvider() instanceof CryptoExtension) {
>   ((CryptoExtension) dir.getProvider()).drain(keyName);
> }
> {code}
> This doesn't work, because the provider is a {{KeyProviderCryptoExtension}}
> rather than a {{CryptoExtension}} - the former is a composite that contains
> the latter. Unfortunately, the unit test didn't catch this, because it
> conveniently rolled the key from the NN's provider.
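The composition trap in the description can be reproduced in isolation. In this sketch the class names merely mirror Hadoop's (the real {{KeyProviderCryptoExtension}} and its inner {{CryptoExtension}} live in org.apache.hadoop.crypto.key); the stand-in types below are simplified stand-ins for illustration, not the actual Hadoop API:

```java
// Simplified mirror of the pattern: the "provider" object *wraps* a
// CryptoExtension as a field instead of implementing it, so an
// `instanceof CryptoExtension` check on the provider is always false.

interface CryptoExtension {
    void drain(String keyName);
}

// Mirrors KeyProviderCryptoExtension: a composite holding a CryptoExtension.
class KeyProviderCryptoExtension {
    private final CryptoExtension extension;

    KeyProviderCryptoExtension(CryptoExtension extension) {
        this.extension = extension;
    }

    // One possible fix: expose drain() on the composite itself,
    // delegating to the wrapped extension, so callers need no cast.
    void drain(String keyName) {
        extension.drain(keyName);
    }
}

public class DrainDemo {
    public static void main(String[] args) {
        CryptoExtension inner = key -> System.out.println("drained " + key);
        Object provider = new KeyProviderCryptoExtension(inner);

        // The buggy check from the description: false, because the
        // composite does not implement the interface it contains.
        System.out.println(provider instanceof CryptoExtension);

        // Working path: go through the composite's own drain().
        ((KeyProviderCryptoExtension) provider).drain("myKey");
    }
}
```

Judging from the edited-files list in the commit (KeyProviderCryptoExtension.java is touched), the committed change appears to take this delegation route, though the sketch above is not the actual patch.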
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12400) Provide a way for NN to drain the local key cache before re-encryption
[ https://issues.apache.org/jira/browse/HDFS-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158076#comment-16158076 ]

Xiao Chen commented on HDFS-12400:
----------------------------------

I think the timed-out {{TestReencryptionWithKMS}} is more likely an environment issue: I ran it locally several times and could not reproduce the timeout. The other failed tests are not related to this patch. Ran
{noformat}
mvn clean test -Dtest=TestReadStripedFileWithMissingBlocks,TestDFSAdminWithHA,TestDirectoryScanner,TestReconstructStripedFile,TestReencryptionWithKMS,TestWriteReadStripedFile,TestNameNodeStatusMXBean,TestEditLogRace,TestAuditLogs
{noformat}
and all passed.

The checkstyle warning is that {{TestReencryption#dfsAdmin}} should be private instead of protected, but IMO that change is also not a good idea: the protected member follows {{TestEncryptionZones}} for consistency, and accessing a protected member from a child test class reads better than a getter method.
[jira] [Commented] (HDFS-12400) Provide a way for NN to drain the local key cache before re-encryption
[ https://issues.apache.org/jira/browse/HDFS-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158078#comment-16158078 ]

Xiao Chen commented on HDFS-12400:
----------------------------------

Committing this, thanks much [~jojochuang]!
[jira] [Commented] (HDFS-12400) Provide a way for NN to drain the local key cache before re-encryption
[ https://issues.apache.org/jira/browse/HDFS-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157976#comment-16157976 ]

Hadoop QA commented on HDFS-12400:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 15s | Docker mode activated. |

Prechecks:
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |

trunk Compile Tests:
| 0 | mvndep | 0m 17s | Maven dependency ordering for branch |
| +1 | mvninstall | 15m 23s | trunk passed |
| +1 | compile | 15m 44s | trunk passed |
| +1 | checkstyle | 2m 2s | trunk passed |
| +1 | mvnsite | 2m 16s | trunk passed |
| +1 | findbugs | 3m 33s | trunk passed |
| +1 | javadoc | 1m 43s | trunk passed |

Patch Compile Tests:
| 0 | mvndep | 0m 17s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 50s | the patch passed |
| +1 | compile | 13m 59s | the patch passed |
| +1 | javac | 13m 59s | the patch passed |
| -0 | checkstyle | 2m 22s | root: The patch generated 1 new + 192 unchanged - 0 fixed = 193 total (was 192) |
| +1 | mvnsite | 2m 32s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 4m 26s | the patch passed |
| +1 | javadoc | 1m 54s | the patch passed |

Other Tests:
| +1 | unit | 9m 47s | hadoop-common in the patch passed. |
| -1 | unit | 93m 27s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 59s | The patch does not generate ASF License warnings. |
| | | 174m 14s | |

Failed junit tests:
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100
- hadoop.hdfs.TestReadStripedFileWithMissingBlocks
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
- hadoop.hdfs.tools.TestDFSAdminWithHA
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
- hadoop.hdfs.TestLeaseRecoveryStriped
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080
- hadoop.hdfs.server.datanode.TestDirectoryScanner
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
[jira] [Commented] (HDFS-12400) Provide a way for NN to drain the local key cache before re-encryption
[ https://issues.apache.org/jira/browse/HDFS-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157908#comment-16157908 ]

Wei-Chiu Chuang commented on HDFS-12400:
----------------------------------------

+1 pending Jenkins
[jira] [Commented] (HDFS-12400) Provide a way for NN to drain the local key cache before re-encryption
[ https://issues.apache.org/jira/browse/HDFS-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157836#comment-16157836 ]

Wei-Chiu Chuang commented on HDFS-12400:
----------------------------------------

Okay, I see. For cancellation, EncryptionZoneManager#cancelReencryptEncryptionZone prints an INFO-level log, so that's good. The test change looks good to me too.
[jira] [Commented] (HDFS-12400) Provide a way for NN to drain the local key cache before re-encryption
[ https://issues.apache.org/jira/browse/HDFS-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157761#comment-16157761 ]

Xiao Chen commented on HDFS-12400:
----------------------------------

Thank you for the review Wei-Chiu!

bq. nit
That only happens for start, so no need to log, I think. :)

As chatted offline, {{flush()}} is technically required only for the JavaKeyStoreProvider. For the tests, we need to flush if the key is rolled and we want to generate new EDEKs from JKSP. Looking at the test code, I think I can do better: in patch 2, key rollover is extracted to a method and done differently for JKSP and KMSCP. This lets the JKSP tests still pass, yet keeps the KMSCP cases the same as a real cluster. Also fixed the checkstyle issue.
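The "flush only for JKSP" test change described above can be sketched as follows. This is a hedged, self-contained illustration: the provider names mirror Hadoop's ({{JavaKeyStoreProvider}} buffers writes until {{flush()}}; the KMS client provider does not), but the types below are hypothetical stand-ins, not the real Hadoop KeyProvider API:

```java
// Simplified stand-in hierarchy: a keystore-backed provider buffers key
// changes in memory until flush() persists them; a KMS-backed provider
// writes through immediately and needs no flush.

abstract class KeyProvider {
    abstract String rollNewVersion(String name);
    void flush() { } // no-op by default
}

class JavaKeyStoreProvider extends KeyProvider {
    private boolean flushed = false;

    @Override
    String rollNewVersion(String name) {
        flushed = false; // new version is buffered, not yet persisted
        return name + "@1";
    }

    @Override
    void flush() { flushed = true; }

    boolean isFlushed() { return flushed; }
}

class KMSClientProvider extends KeyProvider {
    @Override
    String rollNewVersion(String name) { return name + "@1"; }
}

public class RolloverDemo {
    // The extracted rollover helper: roll the key, then flush only for the
    // keystore-backed provider, keeping the KMS path identical to a real
    // cluster where no extra flush happens.
    static String rollKey(KeyProvider provider, String name) {
        String newVersion = provider.rollNewVersion(name);
        if (provider instanceof JavaKeyStoreProvider) {
            provider.flush();
        }
        return newVersion;
    }

    public static void main(String[] args) {
        JavaKeyStoreProvider jksp = new JavaKeyStoreProvider();
        System.out.println(rollKey(jksp, "zoneKey") + " flushed=" + jksp.isFlushed());
        System.out.println(rollKey(new KMSClientProvider(), "zoneKey"));
    }
}
```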
[jira] [Commented] (HDFS-12400) Provide a way for NN to drain the local key cache before re-encryption
[ https://issues.apache.org/jira/browse/HDFS-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157205#comment-16157205 ]

Wei-Chiu Chuang commented on HDFS-12400:
----------------------------------------

Very trivial nit:
{code}
LOG.info("Re-encryption using key version " + keyVersionName
    + " for zone " + zone);
{code}
Can we also log the {{action}} in this message?

I am a little unsure why it has to remove flush() after rollNewVersion(). Could you elaborate? Thanks.
[jira] [Commented] (HDFS-12400) Provide a way for NN to drain the local key cache before re-encryption
[ https://issues.apache.org/jira/browse/HDFS-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156175#comment-16156175 ]

Hadoop QA commented on HDFS-12400:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |

Prechecks:
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |

trunk Compile Tests:
| 0 | mvndep | 0m 20s | Maven dependency ordering for branch |
| +1 | mvninstall | 15m 2s | trunk passed |
| +1 | compile | 17m 38s | trunk passed |
| +1 | checkstyle | 2m 12s | trunk passed |
| +1 | mvnsite | 2m 5s | trunk passed |
| +1 | findbugs | 3m 16s | trunk passed |
| +1 | javadoc | 1m 49s | trunk passed |

Patch Compile Tests:
| 0 | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 34s | the patch passed |
| +1 | compile | 14m 2s | the patch passed |
| +1 | javac | 14m 2s | the patch passed |
| -0 | checkstyle | 2m 13s | root: The patch generated 1 new + 192 unchanged - 0 fixed = 193 total (was 192) |
| +1 | mvnsite | 2m 7s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 3m 43s | the patch passed |
| +1 | javadoc | 1m 46s | the patch passed |

Other Tests:
| -1 | unit | 8m 14s | hadoop-common in the patch failed. |
| -1 | unit | 131m 43s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 38s | The patch does not generate ASF License warnings. |
| | | 210m 15s | |

Failed junit tests:
- hadoop.security.TestShellBasedUnixGroupsMapping
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
- hadoop.hdfs.TestLeaseRecoveryStriped
- hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
- hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
- hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
- hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure

Timed out junit tests:
- org.apache.hadoop.hdfs.TestWriteReadStripedFile

Subsystem Report/Notes:
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12400 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12885656/HDFS-12400.01.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 802b821529ef 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1f3bc63 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle |