[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16063533#comment-16063533 ]

Andrew Wang commented on HDFS-11956:
------------------------------------

+1 LGTM, I'll adjust the release notes as well.

> Fix BlockToken compatibility with Hadoop 2.x clients
> ----------------------------------------------------
>
>                 Key: HDFS-11956
>                 URL: https://issues.apache.org/jira/browse/HDFS-11956
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.0.0-alpha4
>            Reporter: Andrew Wang
>            Assignee: Ewan Higgs
>            Priority: Blocker
>         Attachments: HDFS-11956.001.patch, HDFS-11956.002.patch, HDFS-11956.003.patch, HDFS-11956.004.patch
>
> Seems like HDFS-9807 broke backwards compatibility with Hadoop 2.x clients.
> When talking to a 3.0.0-alpha4 DN with security on:
> {noformat}
> 2017-06-06 23:27:22,568 WARN org.apache.hadoop.hdfs.server.datanode.DataNode:
> Block token verification failed: op=WRITE_BLOCK,
> remoteAddress=/172.28.208.200:53900, message=Block token with StorageIDs
> [DS-c0f24154-a39b-4941-93cd-5b8323067ba2] not valid for access with
> StorageIDs []
> {noformat}

--
This message was sent by Atlassian JIRA (v6.4.14#64029)

To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16063004#comment-16063004 ]

Hadoop QA commented on HDFS-11956:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 33s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 14m 35s | trunk passed |
| +1 | compile | 0m 57s | trunk passed |
| +1 | checkstyle | 0m 39s | trunk passed |
| +1 | mvnsite | 1m 4s | trunk passed |
| +1 | findbugs | 1m 49s | trunk passed |
| +1 | javadoc | 0m 42s | trunk passed |
| +1 | mvninstall | 0m 57s | the patch passed |
| +1 | compile | 0m 48s | the patch passed |
| +1 | javac | 0m 48s | the patch passed |
| +1 | checkstyle | 0m 36s | the patch passed |
| +1 | mvnsite | 1m 2s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 2m 3s | the patch passed |
| +1 | javadoc | 0m 43s | the patch passed |
| -1 | unit | 75m 43s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 19s | The patch does not generate ASF License warnings. |
| | | 104m 56s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
| | hadoop.hdfs.TestPread |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11956 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12874469/HDFS-11956.004.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 9eaf573c5f1b 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 379f19a |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/20041/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/20041/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/20041/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16060103#comment-16060103 ]

Andrew Wang commented on HDFS-11956:
------------------------------------

bq. Maybe the forward compatibility of StorageTypes is another JIRA I should raise?

Sure, it would certainly be great to support this if possible. If not, there is precedent for requiring a client upgrade to use new features, like encryption or EC.

bq. What's your time frame for tagging alpha4?

I'm going to be travelling for a while starting July 8th, so my hope was Monday June 26th so there's some slack in the schedule. Since reverting seems like an okay solution, we don't need to feel pressured for this JIRA.
[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16060089#comment-16060089 ]

Ewan Higgs commented on HDFS-11956:
-----------------------------------

Hi Andrew,

{quote}
IIUC, we know the storage type even for an old client since it passes it in the writeBlock request. Can an old client correctly pass along an unknown StorageType (e.g. PROVIDED)?
{quote}

I think you understood correctly. I don't think an old client will be able to deserialise a PROVIDED StorageType from the protobuf, so it will fail to pass along that StorageType (though I have not yet done the cross-version testing with Hadoop 2.6). I think this is the same as would be the case any time a new StorageType is introduced (e.g. if we hypothetically added {{StorageType.NVME}}, {{StorageType.SMR}}, etc.). Maybe the forward compatibility of StorageTypes is another JIRA I should raise?

{quote}
If so, then I see how this works; essentially, only require storageIDs when writing to provided storage.
{quote}

Yes.

{quote}
For 3.0.0-alpha4 I can also revert HDFS-9807 while we figure out this JIRA. We did this internally to unblock testing.
{quote}

I'm traveling today, so I won't be able to furnish a patch just yet. What's your time frame for tagging alpha4?
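The forward-compatibility concern discussed above can be sketched in a few lines. This is illustrative Java, not Hadoop or protobuf code: it mimics how proto2-style decoding drops an enum wire value the client's generated code has never seen, which is what would happen to a new value like PROVIDED on a 2.x-era client. The enum values and the ordinal 5 are assumptions for the demo.

```java
import java.util.Optional;

// Storage types a hypothetical 2.x-era client was compiled against.
enum OldStorageType { DISK, SSD, ARCHIVE, RAM_DISK }

public class UnknownEnumDemo {
    // Mimics proto2 enum decoding: a wire value outside the known range
    // cannot be mapped, so the field is effectively dropped by the old client.
    static Optional<OldStorageType> decode(int wireValue) {
        OldStorageType[] known = OldStorageType.values();
        return (wireValue >= 0 && wireValue < known.length)
                ? Optional.of(known[wireValue])
                : Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(decode(1)); // SSD: known to the old client
        System.out.println(decode(5)); // hypothetical PROVIDED ordinal: unknown, dropped
    }
}
```

So even if the datanode expects the client to echo the storage type back in writeBlock, an old client cannot round-trip a value it cannot decode.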
[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16060001#comment-16060001 ]

Andrew Wang commented on HDFS-11956:
------------------------------------

Thanks Ewan. I'm new to this feature, so IIUC, we know the storage type even for an old client since it passes it in the writeBlock request. Can an old client correctly pass along an unknown StorageType (e.g. PROVIDED)?

If so, then I see how this works; essentially, only require storageIDs when writing to provided storage.

For 3.0.0-alpha4 I can also revert HDFS-9807 while we figure out this JIRA. We did this internally to unblock testing.
[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16059663#comment-16059663 ]

Ewan Higgs commented on HDFS-11956:
-----------------------------------

Hi,

Another idea is to just ignore the BlockTokenIdentifier's StorageID check if the storageId list in the request is empty. In most cases the storageId in the message is currently just a suggestion for the datanode; but in the case of provided storage (HDFS-9806) it is the storageId of the provided storage system. If the storageId list is empty, the write to provided storage will simply fail, since the datanode won't know where/how to write it.
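The fallback proposed above might look like the following. This is a hypothetical sketch, not the actual BlockTokenSecretManager code: the method name and signature are invented for illustration. The idea is simply that an empty request-side StorageID list (which is all a Hadoop 2.x client can send) skips enforcement instead of failing verification.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class StorageIdCompatDemo {
    // Hypothetical check: tokenStorageIds come from the block token,
    // requestStorageIds from the client's writeBlock request.
    static boolean storageIdsValid(List<String> tokenStorageIds,
                                   List<String> requestStorageIds) {
        if (requestStorageIds.isEmpty()) {
            // Legacy (2.x) client: it never sends StorageIDs, so there is
            // nothing to enforce; accept and let the write proceed as in 2.x.
            return true;
        }
        // Newer client: every StorageID in the request must be covered
        // by the token.
        return tokenStorageIds.containsAll(requestStorageIds);
    }

    public static void main(String[] args) {
        List<String> token = Arrays.asList("DS-c0f24154-a39b-4941-93cd-5b8323067ba2");
        System.out.println(storageIdsValid(token, Collections.emptyList())); // legacy: allowed
        System.out.println(storageIdsValid(token, Arrays.asList("DS-other"))); // mismatch: rejected
    }
}
```

Under this sketch, the failure quoted in the issue description (token with one StorageID, request with none) would be accepted rather than rejected.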
[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058578#comment-16058578 ]

Andrew Wang commented on HDFS-11956:
------------------------------------

Hey folks, could we close on this issue this week? I'm planning to cut alpha4 next week.
[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16052130#comment-16052130 ]

Wei-Chiu Chuang commented on HDFS-11956:
----------------------------------------

Hey Ewan, could you please elaborate a little bit more on this config key? For example, instead of "will allow older clients to access the system", maybe you can be more precise and say this will allow old clients (Hadoop 2.x) to access a Hadoop 3 cluster. Also, for "but will prevent some newer features from working.", it might be better to mention that you mean features added in Hadoop 3.

By the way, what are the new features that would not work? Looking at HDFS-9807, it looks like disabling it would break HSM block placement policy.
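The kind of hdfs-default.xml documentation being asked for here might look like the following. Note this is a sketch only: the property name below is a placeholder invented for illustration (the real key name is defined in the attached patches, which are not quoted in this thread), and the description paraphrases the wording suggestions above.

```xml
<property>
  <!-- Placeholder name for illustration; the actual key is defined in the patch. -->
  <name>dfs.block.access.token.storageid.check.enable</name>
  <value>true</value>
  <description>
    If disabled, DataNodes skip the StorageID portion of block token
    verification. This allows old clients (Hadoop 2.x) to access a
    Hadoop 3 cluster, but prevents some features added in Hadoop 3
    that depend on StorageIDs from working.
  </description>
</property>
```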
[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051255#comment-16051255 ]

Hadoop QA commented on HDFS-11956:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 19s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| +1 | mvninstall | 14m 39s | trunk passed |
| +1 | compile | 0m 57s | trunk passed |
| +1 | checkstyle | 0m 54s | trunk passed |
| +1 | mvnsite | 1m 9s | trunk passed |
| +1 | findbugs | 1m 57s | trunk passed |
| +1 | javadoc | 0m 47s | trunk passed |
| +1 | mvninstall | 0m 56s | the patch passed |
| +1 | compile | 0m 44s | the patch passed |
| +1 | javac | 0m 44s | the patch passed |
| -0 | checkstyle | 0m 43s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 711 unchanged - 0 fixed = 714 total (was 711) |
| +1 | mvnsite | 0m 55s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | findbugs | 1m 49s | the patch passed |
| +1 | javadoc | 0m 38s | the patch passed |
| -1 | unit | 93m 39s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 20s | The patch does not generate ASF License warnings. |
| | | 121m 54s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
| | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11956 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12873187/HDFS-11956.003.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux 60f266674f7c 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fb68980 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/19922/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/19922/artifact/patchprocess/whitespace-eol.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19922/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19922/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19922/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT |
[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051084#comment-16051084 ]

Andrew Wang commented on HDFS-11956:
------------------------------------

LGTM, though we should also add an entry to hdfs-default.xml as documentation for this new option, and some of the checkstyle issues look fixable. Would appreciate if you could validate that the failed unit tests are unrelated (sadly, there are a lot). [~chris.douglas] do you want to review as well?
[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049888#comment-16049888 ]

Hadoop QA commented on HDFS-11956:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| +1 | mvninstall | 14m 38s | trunk passed |
| +1 | compile | 0m 58s | trunk passed |
| +1 | checkstyle | 0m 50s | trunk passed |
| +1 | mvnsite | 1m 6s | trunk passed |
| +1 | findbugs | 2m 0s | trunk passed |
| +1 | javadoc | 0m 48s | trunk passed |
| +1 | mvninstall | 0m 58s | the patch passed |
| +1 | compile | 0m 57s | the patch passed |
| +1 | javac | 0m 57s | the patch passed |
| -0 | checkstyle | 0m 50s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 711 unchanged - 0 fixed = 718 total (was 711) |
| +1 | mvnsite | 1m 0s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 55s | the patch passed |
| +1 | javadoc | 0m 40s | the patch passed |
| -1 | unit | 93m 38s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 20s | The patch does not generate ASF License warnings. |
| | | 122m 21s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeMXBean |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
| | hadoop.tools.TestHdfsConfigFields |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
| | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
| | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
| | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11956 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12873046/HDFS-11956.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux d3243376df8b 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 999c8fc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/19909/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19909/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19909/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19909/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049561#comment-16049561 ]

Hadoop QA commented on HDFS-11956:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 14s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| +1 | mvninstall | 13m 31s | trunk passed |
| +1 | compile | 0m 50s | trunk passed |
| +1 | checkstyle | 0m 44s | trunk passed |
| +1 | mvnsite | 1m 0s | trunk passed |
| +1 | findbugs | 1m 43s | trunk passed |
| +1 | javadoc | 0m 42s | trunk passed |
| +1 | mvninstall | 0m 48s | the patch passed |
| +1 | compile | 0m 45s | the patch passed |
| +1 | javac | 0m 45s | the patch passed |
| -0 | checkstyle | 0m 41s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 711 unchanged - 0 fixed = 714 total (was 711) |
| +1 | mvnsite | 0m 51s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 44s | the patch passed |
| +1 | javadoc | 0m 37s | the patch passed |
| -1 | unit | 70m 15s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 19s | The patch does not generate ASF License warnings. |
| | | 96m 0s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
| | hadoop.hdfs.web.TestWebHDFS |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| | hadoop.tools.TestHdfsConfigFields |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11956 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12873002/HDFS-11956.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 14c8f2a52ed8 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 999c8fc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/19906/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19906/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19906/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19906/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049514#comment-16049514 ] Andrew Wang commented on HDFS-11956: Thanks for working on this, Ewan. Is it possible to add a unit test for this?
[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049350#comment-16049350 ] Ewan Higgs commented on HDFS-11956: --- I took a look and see that this fails when writing blocks, e.g.:
{code}
hadoop-2.6.5/bin/hdfs dfs -copyFromLocal hello.txt /
{code}
This comes from the fact that the {{BlockTokenIdentifier}} now carries the StorageID, but the StorageID is an optional request field that is new in 3.0, so 2.x clients don't pass it in. Defaulting to 'null' and allowing access would of course defeat the purpose of the BlockTokenIdentifier, so I think this should be fixed with a flag (e.g. {{dfs.block.access.token.storageid.enable}}) which defaults to false and makes the {{BlockTokenSecretManager}} only use the storage ID in the {{checkAccess}} call if it's enabled. This will allow old clients to work, but it won't let the system take advantage of the new features enabled by using the storage ID in write calls.
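The gate proposed above can be sketched roughly as follows. This is an illustrative sketch, not the actual patch: the flag name {{dfs.block.access.token.storageid.enable}} comes from the comment, while the class and method names here ({{StorageIdGate.isAllowed}}) are hypothetical stand-ins for the check inside {{BlockTokenSecretManager.checkAccess}}, and the direction of the containment test is an assumption chosen to match the log message in the issue description.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

/**
 * Hypothetical sketch of a flag-gated StorageID check. When the flag is
 * off (the proposed default), StorageIDs play no part in verification,
 * so Hadoop 2.x clients that never send them still pass.
 */
public class StorageIdGate {

    /**
     * @param enforceStorageIds value of the proposed
     *        dfs.block.access.token.storageid.enable flag
     * @param tokenStorageIds   StorageIDs baked into the block token
     * @param requestStorageIds StorageIDs named by the incoming request
     *                          (empty for Hadoop 2.x clients)
     */
    public static boolean isAllowed(boolean enforceStorageIds,
                                    String[] tokenStorageIds,
                                    String[] requestStorageIds) {
        if (!enforceStorageIds) {
            // Old behaviour: ignore StorageIDs entirely.
            return true;
        }
        // New behaviour (assumed direction): every StorageID in the token
        // must also be named by the request, so a token scoped to
        // [DS-...] against a request with [] fails, as in the log.
        Set<String> requested =
            new HashSet<>(Arrays.asList(requestStorageIds));
        for (String id : tokenStorageIds) {
            if (!requested.contains(id)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        String[] tokenIds = { "DS-c0f24154-a39b-4941-93cd-5b8323067ba2" };
        // Flag off: a 2.x client sending no StorageIDs is accepted.
        System.out.println(isAllowed(false, tokenIds, new String[0]));
        // Flag on: the same request is rejected, reproducing the error
        // in the issue description.
        System.out.println(isAllowed(true, tokenIds, new String[0]));
    }
}
```

With the flag defaulted to false, old clients keep working; clusters that want StorageID-scoped tokens opt in once all clients are on 3.x.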