[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790834#comment-16790834 ]

Hudson commented on HDDS-1043:
------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16188 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/16188/])
HDDS-1043. Enable token based authentication for S3 api (elek: rev dcb0de848d8388fcee425847e52f6361790225c6)

* (add) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/S3SecretManager.java
* (edit) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/exception/S3ErrorTable.java
* (edit) hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/header/TestAuthorizationHeaderV4.java
* (add) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneServiceProvider.java
* (delete) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3SecretManager.java
* (add) hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/security/TestAWSV4AuthValidator.java
* (add) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSV4AuthParser.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/S3SecretValue.java
* (add) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSAuthParser.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java
* (edit) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/header/AuthorizationHeaderV4.java
* (edit) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientProducer.java
* (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneTokenIdentifier.java
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/commonawslib.robot
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure/docker-compose.yaml
* (delete) hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/header/AWSConstants.java
* (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestOzoneConfigurationFields.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/security/TestOzoneDelegationTokenSecretManager.java
* (add) hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/TestOzoneClientProducer.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneSecurityException.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
* (edit) hadoop-ozone/dist/src/main/smoketest/test.sh
* (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
* (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java
* (add) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/AWSV4AuthValidator.java
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config
* (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClient.java
* (add) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/S3SecretManagerImpl.java
* (delete) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3SecretManagerImpl.java
* (edit) hadoop-ozone/dist/src/main/smoketest/security/ozone-secure.robot

> Enable token based authentication for S3 api
> --------------------------------------------
>
>                 Key: HDDS-1043
>                 URL: https://issues.apache.org/jira/browse/HDDS-1043
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Ajay Kumar
>            Assignee: Ajay Kumar
>            Priority: Blocker
>              Labels: pull-request-available, security
>             Fix For: 0.4.0
>
>         Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, HDDS-1043.02.patch, HDDS-1043.03.patch, HDDS-1043.04.patch, HDDS-1043.05.patch, HDDS-1043.06.patch, HDDS-1043.07.patch
>
>          Time Spent: 29.5h
>  Remaining Estimate: 0h
>
> Ozone has an S3 API and a mechanism to create S3-like secrets for users. This jira proposes Hadoop-compatible token based authentication for the S3 API which utilizes the S3 secret stored in OM.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
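Among the committed files, AWSV4AuthValidator.java is the piece that checks an AWS Signature V4 request signature against the S3 secret stored in OM. The signing-key derivation it has to perform is fixed by the SigV4 specification: a chain of HMAC-SHA256 operations over the date, region, service, and the literal terminator "aws4_request". The following is a minimal standalone sketch of that documented algorithm, not the Ozone code itself; the class and method names are illustrative.

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public final class SigV4KeyDerivation {

  private static byte[] hmacSha256(byte[] key, String data) {
    try {
      Mac mac = Mac.getInstance("HmacSHA256");
      mac.init(new SecretKeySpec(key, "HmacSHA256"));
      return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
    } catch (Exception e) {
      throw new IllegalStateException("HmacSHA256 unavailable", e);
    }
  }

  /**
   * Derives the SigV4 signing key by chaining HMAC-SHA256 over the date,
   * region, service and the fixed "aws4_request" terminator, starting from
   * the key "AWS4" + secret, as described in the AWS SigV4 documentation.
   */
  public static byte[] signingKey(String secret, String date, String region,
      String service) {
    byte[] kDate =
        hmacSha256(("AWS4" + secret).getBytes(StandardCharsets.UTF_8), date);
    byte[] kRegion = hmacSha256(kDate, region);
    byte[] kService = hmacSha256(kRegion, service);
    return hmacSha256(kService, "aws4_request");
  }
}
```

The validator would then HMAC the request's StringToSign with this key and compare the hex result against the Signature field of the Authorization header.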
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781393#comment-16781393 ]

Hadoop QA commented on HDDS-1043:
---------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 26s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| || || || trunk Compile Tests ||
|  0 | mvndep | 0m 32s | Maven dependency ordering for branch |
| +1 | mvninstall | 23m 35s | trunk passed |
| -1 | compile | 19m 52s | root in trunk failed. |
| +1 | checkstyle | 3m 53s | trunk passed |
| -1 | mvnsite | 0m 38s | dist in trunk failed. |
| +1 | shadedclient | 12m 8s | branch has no errors when building and testing our client artifacts. |
|  0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test hadoop-ozone/dist |
| +1 | findbugs | 4m 38s | trunk passed |
| +1 | javadoc | 3m 53s | trunk passed |
|| || || || Patch Compile Tests ||
|  0 | mvndep | 0m 34s | Maven dependency ordering for patch |
| -1 | mvninstall | 0m 21s | dist in the patch failed. |
| -1 | compile | 18m 23s | root in the patch failed. |
| -1 | cc | 18m 23s | root in the patch failed. |
| -1 | javac | 18m 23s | root in the patch failed. |
| +1 | checkstyle | 3m 54s | the patch passed |
| -1 | mvnsite | 0m 38s | dist in the patch failed. |
| +1 | shellcheck | 0m 0s | There were no new shellcheck issues. |
| +1 | shelldocs | 0m 32s | There were no new shelldocs issues. |
| -1 | whitespace | 0m 0s | The patch has 3 line(s) with tabs. |
| +1 | shadedclient | 12m 12s | patch has no errors when building and testing our client artifacts. |
|  0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test hadoop-ozone/dist |
| +1 | findbugs | 5m 28s | the patch passed |
| +1 | javadoc | 3m 54s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 1m 24s | common in the patch passed. |
| +1 | unit | 0m 53s | common in the patch passed. |
| +1 | unit | 0m 58s | ozone-manager in the patch passed. |
| +1 | unit | 0m 52s | s3gateway in the patch passed. |
| -1 | unit |
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781296#comment-16781296 ]

Ajay Kumar commented on HDDS-1043:
----------------------------------

Patch v7 to fix jenkins warnings.
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781220#comment-16781220 ]

Hadoop QA commented on HDDS-1043:
---------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 16s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| || || || trunk Compile Tests ||
|  0 | mvndep | 1m 22s | Maven dependency ordering for branch |
| +1 | mvninstall | 23m 6s | trunk passed |
| -1 | compile | 17m 4s | root in trunk failed. |
| +1 | checkstyle | 3m 55s | trunk passed |
| -1 | mvnsite | 0m 37s | dist in trunk failed. |
| +1 | shadedclient | 12m 10s | branch has no errors when building and testing our client artifacts. |
|  0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test hadoop-ozone/dist |
| +1 | findbugs | 4m 7s | trunk passed |
| +1 | javadoc | 3m 47s | trunk passed |
|| || || || Patch Compile Tests ||
|  0 | mvndep | 0m 29s | Maven dependency ordering for patch |
| -1 | mvninstall | 0m 19s | dist in the patch failed. |
| -1 | compile | 16m 21s | root in the patch failed. |
| -1 | cc | 16m 21s | root in the patch failed. |
| -1 | javac | 16m 21s | root in the patch failed. |
| -0 | checkstyle | 3m 55s | root: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| -1 | mvnsite | 0m 37s | dist in the patch failed. |
| +1 | shellcheck | 0m 0s | There were no new shellcheck issues. |
| +1 | shelldocs | 0m 34s | There were no new shelldocs issues. |
| -1 | whitespace | 0m 1s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -1 | whitespace | 0m 1s | The patch has 3 line(s) with tabs. |
| +1 | shadedclient | 11m 43s | patch has no errors when building and testing our client artifacts. |
|  0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test hadoop-ozone/dist |
| -1 | findbugs | 0m 58s | hadoop-ozone/s3gateway generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | javadoc | 3m 48s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 1m 29s | common in the patch passed. |
| +1 | unit | 0m 50s | common in the patch passed. |
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781103#comment-16781103 ]

Ajay Kumar commented on HDDS-1043:
----------------------------------

[~elek], [~bharatviswa] thanks for the reviews. Addressed them in patch 6.

{quote}1.) I am a big +1 on the s/ozoneManager/om/ rename in the docker files. But it would be easier to do in a separate jira IMHO (and this patch could be smaller to review). I would immediately commit that one...{quote}
Reverted the changes for the other docker files. The change in smoketest#test.sh will cause other smoketests to fail, but it is required to test this patch via the robot tests added in the patch.

{quote}2.) Until now it was possible to execute the s3g robot tests against a real AWS endpoint url. We used it to prove that our tests are valid (they should work in the same way with s3 or with ozone). It's not clear how we can do it in the future after this patch. I think the kinit part should be moved out of the aws test or should be made optional.
3.) NIT: sudo yum install -y krb5-user --> Fix me if I am wrong, but I think the name of the package is krb5-workstation. But thanks to Xiaoyu Yao it is not required any more as it's added to the base image.{quote}
Reverted the change in commonawslib as we have a test in "ozone-secure.robot".

{quote}4.) NIT2: There are a few strange names (strange for me): OZONE_S3_TOKEN_MAX_DATE_DEFAULT (I think it's not a date but a time period, and it seems to be some ttl or expiry, not a maximum). TIME_FORMATTER_FORMATTER: I think it's an RFC???_TIME_FORMATTER (don't know the name of the exact pattern).{quote}
Changed them to OZONE_S3_TOKEN_MAX_LIFETIME_KEY_DEFAULT and TIME_FORMATTER.

[~bharatviswa]
{quote}I had the same comment as marton, now we are doing kinit and setting up the v4 headers. I think here if we want to make these tests work with an aws s3 endpoint and a non-secure ozone cluster we can use the ozone.security.enabled flag and then act accordingly.{quote}
With the revert of those changes in commonawslib.robot I think this is not applicable anymore. Let me know if I am missing something.
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780853#comment-16780853 ]

Bharat Viswanadham commented on HDDS-1043:
------------------------------------------

Thank you [~ajayydv] for the update. Could you please rebase the patch? It no longer applies on top of the latest trunk.

# I had the same comment as Marton: now we are doing kinit and setting up the v4 headers. I think if we want these tests to work with an aws s3 endpoint and a non-secure ozone cluster, we can check the ozone.security.enabled flag and act accordingly.

Will look into the other changes after the rebase.
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16780792#comment-16780792 ]

Elek, Marton commented on HDDS-1043:
------------------------------------

Thanks [~ajayydv] for working on this. I am very excited to get this. I just started to test it and I have some initial comments:

1.) I am a big +1 on the s/ozoneManager/om/ rename in the docker files. But it would be easier to do in a separate jira IMHO (and this patch could be smaller to review). I would immediately commit that one...

2.) Until now it was possible to execute the s3g robot tests against a real AWS endpoint url. We used it to prove that our tests are valid (they should work in the same way with s3 or with ozone). It's not clear how we can do it in the future after this patch. I think the kinit part should be moved out of the aws test or should be made optional.

3.) NIT: sudo yum install -y krb5-user --> Fix me if I am wrong, but I think the name of the package is krb5-workstation. But thanks to [~xyao] it is not required any more as it's added to the base image.

4.) NIT2: There are a few strange names (strange for me):
* OZONE_S3_TOKEN_MAX_DATE_DEFAULT (I think it's not a date but a time period, and it seems to be some ttl or expiry, not a maximum)
* TIME_FORMATTER_FORMATTER: I think it's an RFC???_TIME_FORMATTER (don't know the name of the exact pattern)
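The TIME_FORMATTER that Elek's NIT is about presumably has to match the SigV4 x-amz-date header, which uses the ISO 8601 "basic" timestamp format (e.g. 20130524T000000Z). A minimal sketch of such a formatter, with an illustrative class name rather than the actual Ozone code:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public final class AmzDate {

  // SigV4's x-amz-date header uses the ISO 8601 "basic" format,
  // e.g. 20130524T000000Z (UTC, no separators inside the date or time).
  public static final DateTimeFormatter TIME_FORMATTER =
      DateTimeFormatter.ofPattern("yyyyMMdd'T'HHmmss'Z'");

  /** Parses an x-amz-date value such as "20130524T000000Z". */
  public static LocalDateTime parse(String amzDate) {
    return LocalDateTime.parse(amzDate, TIME_FORMATTER);
  }
}
```

This also illustrates why a name like TIME_FORMATTER fits better than an RFC-numbered one: the pattern is an ISO 8601 profile, not one of the RFC 1123-style formats.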
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779986#comment-16779986 ]

Hadoop QA commented on HDDS-1043:
---------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 0s | Docker mode activated. |
| -1 | patch | 0m 8s | HDDS-1043 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-1043 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12960399/HDDS-1043.05.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/2400/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779500#comment-16779500 ]

Ajay Kumar commented on HDDS-1043:
----------------------------------

Patch v5 to rebase with trunk.
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777284#comment-16777284 ]

Ajay Kumar commented on HDDS-1043:
----------------------------------

[~anu] thanks for the review and the detailed comments. I think your main concern is the lack of validation: this patch doesn't validate the http request. Created [HDDS-1177] to handle that separately. Will update this jira once HDDS-1177 is committed.
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777264#comment-16777264 ]

Bharat Viswanadham commented on HDDS-1043:
------------------------------------------

I feel we already have AuthorizationHeaderV4.java; we can use that to parse the Authorization header, and for the remaining work (computing stringToSign and so on) we can add methods to the existing class. That way, if the header is not according to the AWS auth header V4 spec, we throw errors. But we have not parsed the credential value to check that it has all the correct entries. We can incorporate [~anu]'s comments there.
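Parsing the credential value as Bharat suggests can be sketched as follows. This assumes the SigV4 credential-scope format documented by AWS (accessKeyId/date/region/service/aws4_request); the class name is illustrative, not the patch's actual code:

```java
public final class CredentialScope {

  public final String accessKeyId;
  public final String date;
  public final String region;
  public final String service;

  /**
   * Splits "accessKeyId/date/region/service/aws4_request" and rejects any
   * value that does not have exactly five fields ending in "aws4_request".
   */
  public CredentialScope(String credential) {
    String[] parts = credential.split("/");
    if (parts.length != 5 || !"aws4_request".equals(parts[4])) {
      throw new IllegalArgumentException(
          "Malformed SigV4 credential: " + credential);
    }
    accessKeyId = parts[0];
    date = parts[1];
    region = parts[2];
    service = parts[3];
  }
}
```

Validating each entry here (rather than only counting commas in the whole header) is what makes the later secret lookup and signature check safe against malformed requests.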
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16776560#comment-16776560 ]

Hadoop QA commented on HDDS-1043:
---------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 29s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| || || || trunk Compile Tests ||
|  0 | mvndep | 2m 18s | Maven dependency ordering for branch |
| +1 | mvninstall | 28m 35s | trunk passed |
| -1 | compile | 24m 2s | root in trunk failed. |
| +1 | checkstyle | 4m 45s | trunk passed |
| -1 | mvnsite | 0m 38s | dist in trunk failed. |
| +1 | shadedclient | 12m 39s | branch has no errors when building and testing our client artifacts. |
|  0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/dist |
| +1 | findbugs | 4m 27s | trunk passed |
| +1 | javadoc | 3m 36s | trunk passed |
|| || || || Patch Compile Tests ||
|  0 | mvndep | 0m 34s | Maven dependency ordering for patch |
| -1 | mvninstall | 0m 20s | dist in the patch failed. |
| -1 | compile | 18m 22s | root in the patch failed. |
| -1 | cc | 18m 22s | root in the patch failed. |
| -1 | javac | 18m 22s | root in the patch failed. |
| -0 | checkstyle | 3m 59s | root: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| -1 | mvnsite | 0m 37s | dist in the patch failed. |
| +1 | shellcheck | 0m 0s | There were no new shellcheck issues. |
| +1 | shelldocs | 0m 37s | There were no new shelldocs issues. |
| -1 | whitespace | 0m 0s | The patch has 4 line(s) with tabs. |
| +1 | shadedclient | 11m 48s | patch has no errors when building and testing our client artifacts. |
|  0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/dist |
| -1 | findbugs | 0m 59s | hadoop-ozone/s3gateway generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | javadoc | 3m 21s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 1m 31s | common in the patch passed. |
| +1 | unit | 0m 56s | common in the patch passed. |
| +1 | unit | 1m 2s | ozone-manager in the patch passed. |
| +1 | unit | 0m 52s | s3gateway in the patch passed. |
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16776521#comment-16776521 ] Anu Engineer commented on HDDS-1043: [~ajayydv] Thanks for working on this patch. I have really reviewed only 2 files so far, primarily AWSV4AuthParser.java and the parent interface. I have a bunch of comments, so I am leaving them here, and I can add my comments on other parts of the code later.

1. I am going to mix some style advice with the code review. Here is a section in the code:
{code}
auth = StringUtils.split(headerMap.getFirst(AUTHORIZATION_HEADER), ',');
{code}
The reader of the code finds this hard to parse, or even to judge whether it is correct, since the expected format is not stated. This line is followed by the snippet below, which looks suspicious since you log an error message but continue. The reader is left wondering whether this is a coding error, by design, etc.
{code}
if (auth.length < 3) {
  LOG.error("Authorization header in unexpected format. Auth header:{}",
      headerMap.get(AUTHORIZATION_HEADER));
}
authParser = new AuthorizationHeaderV4(
    headerMap.getFirst(AUTHORIZATION_HEADER));
{code}
So the next thing the reader has to do is look at the link at the top of the source code comments. Unfortunately, that page has no information about this header. I would like to propose that we rewrite this expression as follows:
{code}
// According to the AWS SigV4 documentation, the authorization header should be
// in the following format:
// Authorization: algorithm Credential=access key ID/credential scope,
//   SignedHeaders=SignedHeaders, Signature=signature
// StringToSign =
//   Algorithm + \n +
//   RequestDateTime + \n +
//   CredentialScope + \n +
//   HashedCanonicalRequest
parseAuthorizationHeader(Map headerMap) {
  Preconditions.checkNotNull(headerMap);
  String auth = headerMap.getFirst(AUTHORIZATION_HEADER);
  // --> Error check needed here.
  // This check is missing in the current code --
  if (StringUtils.isEmpty(auth)) {
    return Error;
  }
  String[] authFields = StringUtils.split(auth, ',');
  if (authFields.length < AUTH_FIELD_COUNT) {
    LOG.error("Authorization header in unexpected format. Auth header:{}",
        headerMap.get(AUTHORIZATION_HEADER));
    // --> Error, return needed here. We cannot continue.
  }
  // Feel free to return an AuthorizationHeaderV4 here,
  // but let us not parse this string again. We have already broken it up into
  // its parts. Parsing a plain string is slow, dangerous and error-prone.
  // Since we have already done the parsing, let us use the parsed results to
  // build an AuthorizationHeaderV4 here.
}
{code}

2. I think the stringToSign function is better renamed as parse.

3. AWSV4AuthParser#getStringToSign --
3.1 The first field in the V4 signature can have one and only one value according to Amazon:
{code}
algorithm = auth[0].substring(0, auth[0].indexOf(" "));
{code}
Hence I propose a simple trim followed by a comparison against the only supported algorithm. Here is the reference I am using for this signature: https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html
{code}
algorithm = auth[0].trim();
if (!algorithm.equals(AWS_V4_ALGO)) {
  // ---> Error. There is no point in going forward since the algorithm does
  // not match the only known algorithm.
  // When we support other signature versions, this field should be split on '-'
  // so that we can extract the signature version independently.
}
{code}
Unfortunately, it is not that simple, since the next token, Credential=AKIAIOSFODNN7EXAMPLE/20130524/us-east-1/s3/aws4_request, is separated only by a space. So what we really need to do is tokenize auth[0] and then trim the first value. You would end up rewriting this part as:
{code}
String[] creds = auth[0].split("\\s");
algorithm = creds[0];
if (!algorithm.equals(AWS_V4_ALGO)) ...
{code}

4.
I am going to presume that the comment above it is not correct (line 81 in AWSV4AuthParser), since the next field should be a key=value pair that looks like this, from https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html:
{code}
Credential=AKIAIOSFODNN7EXAMPLE/20130524/us-east-1/s3/aws4_request
{code}
Since we just did the creds split in the line above, let us write a function that handles the parsing of credentials, something like:
{code}
parseCredentials(String credential) { // A credential is a key value pair that looks like
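The tokenization proposed in the review above can be sketched end to end as a small standalone program. This is a minimal illustration only, assuming the SigV4 header format from the AWS documentation; the class, method, and constant names here (SigV4HeaderSketch, parseAuthorizationHeader, AUTH_FIELD_COUNT) are illustrative and not the actual Ozone code:

```java
// Illustrative sketch of splitting an AWS SigV4 Authorization header; not Ozone's code.
public class SigV4HeaderSketch {
    static final String AWS_V4_ALGO = "AWS4-HMAC-SHA256";
    static final int AUTH_FIELD_COUNT = 3;

    // Expects: "AWS4-HMAC-SHA256 Credential=<key>/<scope>, SignedHeaders=..., Signature=..."
    // Returns {algorithm, credential, signedHeaders, signature}, failing fast on bad input.
    static String[] parseAuthorizationHeader(String auth) {
        if (auth == null || auth.isEmpty()) {
            throw new IllegalArgumentException("Missing Authorization header");
        }
        String[] fields = auth.split(",");
        if (fields.length < AUTH_FIELD_COUNT) {
            throw new IllegalArgumentException("Unexpected Authorization header: " + auth);
        }
        // The first field carries both the algorithm and the Credential pair,
        // separated only by whitespace, so tokenize it before comparing.
        String[] creds = fields[0].trim().split("\\s+");
        if (creds.length != 2 || !AWS_V4_ALGO.equals(creds[0])) {
            throw new IllegalArgumentException("Unsupported algorithm in: " + fields[0]);
        }
        return new String[] {creds[0], creds[1].trim(),
                             fields[1].trim(), fields[2].trim()};
    }

    public static void main(String[] args) {
        String header = "AWS4-HMAC-SHA256 "
            + "Credential=AKIAIOSFODNN7EXAMPLE/20130524/us-east-1/s3/aws4_request, "
            + "SignedHeaders=host;range;x-amz-date, "
            + "Signature=fe5f80f77d5fa3beca038a248ff027d0445342fe2855ddc963176630326f1024";
        String[] parts = parseAuthorizationHeader(header);
        System.out.println(parts[0]); // AWS4-HMAC-SHA256
        System.out.println(parts[1]);
    }
}
```

Note how every malformed input exits through an exception instead of being logged and parsed again, which is the core of the review comment above.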
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16776515#comment-16776515 ] Ajay Kumar commented on HDDS-1043: -- patch v3 to address jenkins issues. Also removed junit test from {{TestSecureOzoneCluster}}, it is covered in robot test. > Enable token based authentication for S3 api > > > Key: HDDS-1043 > URL: https://issues.apache.org/jira/browse/HDDS-1043 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Labels: security > Fix For: 0.4.0 > > Attachments: HDDS-1043.00.patch, HDDS-1043.01.patch, > HDDS-1043.02.patch, HDDS-1043.03.patch > > > Ozone has a S3 api and mechanism to create S3 like secrets for user. This > jira proposes hadoop compatible token based authentication for S3 api which > utilizes S3 secret stored in OM. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16776164#comment-16776164 ] Hadoop QA commented on HDDS-1043: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 33s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 50s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 17m 45s{color} | {color:red} root in trunk failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 35s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 42s{color} | {color:red} dist in trunk failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 46s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test hadoop-ozone/dist {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 35s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 18s{color} | {color:red} dist in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 16m 6s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 16m 6s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 16m 6s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 10s{color} | {color:orange} root: The patch generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 32s{color} | {color:red} dist in the patch failed. {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 1s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 35s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 4 line(s) with tabs. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 9s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test hadoop-ozone/dist {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 14s{color} | {color:red} hadoop-ozone/common generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 50s{color} | {color:red} hadoop-ozone/s3gateway generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 22s{color} | {color:green} common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 47s{color} | {color:red} common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} |
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16776138#comment-16776138 ] Ajay Kumar commented on HDDS-1043: -- [~bharatviswa] thanks for the review.
{quote}Can we use the already existing header parsers AuthorizationHeaderV4 and AuthorizationHeaderV2 instead of parsing again in the new class AWSV4AuthParser? Same comment for the V2 parser. Can we also add reference links, so that it is easy to refer to the AWS header documentation? We also have AuthenticationHeaderParser, which checks for type V2 and V4 and then does the required handling. I think we should do similar checks in OzoneClientProducer.java and then create the token?{quote}
Done.
{quote}In OzoneDelegationTokenSecretManager.java, we call getS3Secret by awsAccessKeyId, but during createS3Secret we pass the user login name. I think this logic should be modified.{quote}
Clients can configure the AWS access key id to whatever id they received during S3 secret generation.
{quote}License header for new classes is wrongly added; it has some GPL header. This needs to be updated.{quote}
Thanks for catching, done.
{quote}Can we add an end to end robot test to make sure this header parsing and validation is happening correctly? We already have s3 robot tests (where we have configured some random values; now these can be set using createS3Secret). Or, for more robust testing, we can have all S3 tests run with a secure cluster. I think the 2nd approach will be good to have.{quote}
Done. Adding the robot test to the secure suite resulted in some overflowing changes to other scripts as well.
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774609#comment-16774609 ] Bharat Viswanadham commented on HDDS-1043: -- Hi [~ajayydv] Few comments I have:
# Can we use the already existing header parsers AuthorizationHeaderV4 and AuthorizationHeaderV2 instead of parsing again in the new class AWSV4AuthParser? Same comment for the V2 parser. Can we also add reference links, so that it is easy to refer to the AWS header documentation?
# We also have AuthenticationHeaderParser, which checks for type V2 and V4 and then does the required handling. I think we should do similar checks in OzoneClientProducer.java and then create the token?
# In OzoneDelegationTokenSecretManager.java, we call getS3Secret by awsAccessKeyId, but during createS3Secret we pass the user login name. I think this logic should be modified.
# License header for new classes is wrongly added; it has some GPL header. This needs to be updated.
# Can we add an end to end robot test to make sure this header parsing and validation is happening correctly? We already have s3 robot tests (where we have configured some random values; now these can be set using createS3Secret). Or, for more robust testing, we can have all S3 tests run with a secure cluster. I think the 2nd approach will be good to have.
I am still trying to understand the server side validation; will update if I have any more comments.
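On the server-side validation mentioned above: the check the Ozone Manager ultimately has to perform is the standard SigV4 signature recomputation against the stored S3 secret. A minimal sketch of the AWS-documented signing-key derivation follows; the class and method names (SigV4KeySketch, signingKey) are illustrative and are not Ozone's AWSV4AuthValidator:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of the AWS SigV4 signing-key derivation chain; not Ozone's code.
public class SigV4KeySketch {
    static byte[] hmac(byte[] key, String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
    }

    // Derives the signing key from the shared secret and the credential-scope
    // fields (date/region/service), per the AWS SigV4 documentation:
    // kSigning = HMAC(HMAC(HMAC(HMAC("AWS4"+secret, date), region), service), "aws4_request")
    static byte[] signingKey(String secret, String date, String region,
                             String service) throws Exception {
        byte[] kDate = hmac(("AWS4" + secret).getBytes(StandardCharsets.UTF_8), date);
        byte[] kRegion = hmac(kDate, region);
        byte[] kService = hmac(kRegion, service);
        return hmac(kService, "aws4_request");
    }

    public static void main(String[] args) throws Exception {
        byte[] key = signingKey("exampleSecret", "20130524", "us-east-1", "s3");
        // The request signature is then HMAC-SHA256(key, stringToSign), hex-encoded,
        // and the server accepts the request only if it matches the client's value.
        System.out.println(key.length);
    }
}
```

Validation on the server side then amounts to rebuilding the string-to-sign from the parsed header fields and comparing the recomputed signature with the one the client sent.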
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16773642#comment-16773642 ] Ajay Kumar commented on HDDS-1043: -- patch v1 with token creation in S3gateway and validation on OM secret manager.
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16762032#comment-16762032 ] Ajay Kumar commented on HDDS-1043: -- cancelling the patch to make some changes in current approach.
[jira] [Commented] (HDDS-1043) Enable token based authentication for S3 api
[ https://issues.apache.org/jira/browse/HDDS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758823#comment-16758823 ] Hadoop QA commented on HDDS-1043: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 35s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 34s{color} | {color:orange} root: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 52s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 17s{color} | {color:green} hadoop-hdds in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 33s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HDDS-1043 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12957346/HDDS-1043.00.patch | | Optional Tests | asflicense javac javadoc unit findbugs checkstyle | | uname | Linux e81b2064a26c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh | | git revision | trunk / 2c13513 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | checkstyle | https://builds.apache.org/job/PreCommit-HDDS-Build/2169/artifact/out/diff-checkstyle-root.txt | | unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2169/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/2169/testReport/ | | Max. process+thread count | 201 (vs. ulimit of 1) | | modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/2169/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Enable token based authentication for S3 api > > > Key: HDDS-1043 > URL: https://issues.apache.org/jira/browse/HDDS-1043 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major >