[jira] [Work logged] (HADOOP-17618) ABFS: Partially obfuscate SAS object IDs in Logs
[ https://issues.apache.org/jira/browse/HADOOP-17618?focusedWorklogId=575333=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575333 ]

ASF GitHub Bot logged work on HADOOP-17618:
---
Author: ASF GitHub Bot
Created on: 01/Apr/21 05:57
Start Date: 01/Apr/21 05:57
Worklog Time Spent: 10m

Work Description: sumangala-patki commented on pull request #2845:
URL: https://github.com/apache/hadoop/pull/2845#issuecomment-811664415

Hi @steveloughran this is for privacy (no issue yet); the values masked identify the security principal (user/app)

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 575333)
Time Spent: 1h (was: 50m)

> ABFS: Partially obfuscate SAS object IDs in Logs
>
> Key: HADOOP-17618
> URL: https://issues.apache.org/jira/browse/HADOOP-17618
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.3.1
> Reporter: Sumangala Patki
> Assignee: Sumangala Patki
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
>
> Delegation SAS tokens are created using various parameters for specifying
> details such as permissions and validity. The requests are logged, along with
> values of all the query parameters. This change will partially mask values
> logged for the following object IDs representing the security principal:
> skoid, saoid, suoid

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
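The partial masking described above (hiding most of an object ID so the security principal cannot be recovered from logs, while keeping enough for correlation) can be sketched as follows. This is an illustrative sketch only: the class and method names, and the keep-the-first-four-characters policy, are assumptions for this example, not the actual ABFS change in PR #2845.

```java
// Illustrative sketch of partially masking a SAS object ID (e.g. skoid,
// saoid, suoid) before it is written to a request log. The 4-character
// prefix policy and all names here are assumptions, not the real ABFS code.
public final class SasObjectIdMasker {

    static String partialMask(String objectId) {
        if (objectId == null || objectId.length() <= 4) {
            // too short to keep a prefix safely; mask everything
            return "XXXX";
        }
        // keep a short prefix for log correlation, mask the remainder;
        // dashes are preserved so a GUID still looks like a GUID
        StringBuilder masked = new StringBuilder(objectId.substring(0, 4));
        for (int i = 4; i < objectId.length(); i++) {
            masked.append(objectId.charAt(i) == '-' ? '-' : 'X');
        }
        return masked.toString();
    }

    public static void main(String[] args) {
        System.out.println(partialMask("d0aa9b4f-45f1-4b4d-8f1a-0123456789ab"));
    }
}
```

The masked value still reveals which request carried which principal within a single log, but no longer identifies the user or app itself.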
[jira] [Commented] (HADOOP-17617) Incorrect representation of RESPONSE for Get Key Version in KMS index.md.vm file
[ https://issues.apache.org/jira/browse/HADOOP-17617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17312879#comment-17312879 ]

Hadoop QA commented on HADOOP-17617:

+1 overall

|| Vote || Subsystem || Runtime || Logfile || Comment ||
| 0 | reexec | 1m 19s | | Docker mode activated. |
|| || || || Prechecks || ||
| +1 | dupname | 0m 0s | | No case conflicting files found. |
| +1 | @author | 0m 0s | | The patch does not contain any @author tags. |
|| || || || trunk Compile Tests || ||
| +1 | mvninstall | 22m 42s | | trunk passed |
| +1 | mvnsite | 0m 25s | | trunk passed |
| +1 | shadedclient | 38m 45s | | branch has no errors when building and testing our client artifacts. |
|| || || || Patch Compile Tests || ||
| +1 | mvninstall | 0m 19s | | the patch passed |
| +1 | mvnsite | 0m 19s | | the patch passed |
| +1 | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 | shadedclient | 14m 56s | | patch has no errors when building and testing our client artifacts. |
|| || || || Other Tests || ||
| +1 | asflicense | 0m 28s | | The patch does not generate ASF License warnings. |
| | | 56m 41s | | |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/178/artifact/out/Dockerfile |
| JIRA Issue | HADOOP-17617 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13023279/HADOOP-17617.002.patch |
| Optional Tests | dupname asflicense mvnsite |
| uname | Linux 5e40e84757a3 4.15.0-126-generic #129-Ubuntu SMP Mon Nov 23 18:53:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 0665ce99308 |
| Max. process+thread count | 606 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-kms U: hadoop-common-project/hadoop-kms |
| Console output | https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/178/console |
| versions | git=2.25.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
> Incorrect representation of RESPONSE for Get Key Version in KMS index.md.vm file
>
> Key: HADOOP-17617
> URL: https://issues.apache.org/jira/browse/HADOOP-17617
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Ravuri Sushma sree
> Assignee: Ravuri Sushma sree
> Priority: Major
> Attachments: HADOOP-17617.001.patch, HADOOP-17617.002.patch
>
> Format of RESPONSE of Get Key Versions in KMS index.md.vm is incorrect
> https://hadoop.apache.org/docs/r3.1.1/hadoop-kms/index.html#Get_Key_Versions
[jira] [Work logged] (HADOOP-17609) Make SM4 support optional for OpenSSL native code
[ https://issues.apache.org/jira/browse/HADOOP-17609?focusedWorklogId=575332=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575332 ]

ASF GitHub Bot logged work on HADOOP-17609:
---
Author: ASF GitHub Bot
Created on: 01/Apr/21 05:33
Start Date: 01/Apr/21 05:33
Worklog Time Spent: 10m

Work Description: iwasakims commented on pull request #2847:
URL: https://github.com/apache/hadoop/pull/2847#issuecomment-811653214

JceSm4CtrCryptoCodec is used instead of OpensslSm4CtrCryptoCodec for 'SM4/CTR/NoPadding', since SM4 is not enabled in openssl:

```
$ openssl version
OpenSSL 1.1.1g FIPS  21 Apr 2020
$ openssl enc -ciphers | grep -i sm4
$ bin/hadoop key create key-sm4 -cipher 'SM4/CTR/NoPadding'
$ bin/hdfs dfs -mkdir /zone-sm4
$ bin/hdfs crypto -createZone -path /zone-sm4 -keyName key-sm4
$ bin/hdfs dfs -put README.txt /zone-sm4/
2021-04-01 05:26:43,137 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
2021-04-01 05:26:43,138 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
2021-04-01 05:26:43,818 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
2021-04-01 05:26:44,447 DEBUG util.PerformanceAdvisory: Crypto codec org.apache.hadoop.crypto.OpensslSm4CtrCryptoCodec is not available.
2021-04-01 05:26:44,447 DEBUG util.PerformanceAdvisory: Using crypto codec org.apache.hadoop.crypto.JceSm4CtrCryptoCodec.
...
$ bin/hdfs dfs -cat README.txt /zone-sm4/README.txt
2021-04-01 05:27:11,450 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
2021-04-01 05:27:11,451 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
2021-04-01 05:27:12,170 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
cat: `README.txt': No such file or directory
2021-04-01 05:27:12,662 DEBUG kms.KMSClientProvider: KMSClientProvider created for KMS url: http://localhost:9600/kms/v1/ delegation token service: kms://http@localhost:9600/kms canonical service: 127.0.0.1:9600.
2021-04-01 05:27:12,665 DEBUG kms.LoadBalancingKMSClientProvider: Created LoadBalancingKMSClientProvider for KMS url: kms://http@localhost:9600/kms with 1 providers. delegation token service: kms://http@localhost:9600/kms, canonical service: 127.0.0.1:9600
2021-04-01 05:27:12,686 DEBUG util.PerformanceAdvisory: Crypto codec org.apache.hadoop.crypto.OpensslSm4CtrCryptoCodec is not available.
2021-04-01 05:27:12,686 DEBUG util.PerformanceAdvisory: Using crypto codec org.apache.hadoop.crypto.JceSm4CtrCryptoCodec.
...
For the latest information about Hadoop, please visit our website at:
   http://hadoop.apache.org/
and our wiki, at:
```

Issue Time Tracking
---
Worklog Id: (was: 575332)
Time Spent: 1h (was: 50m)

> Make SM4 support optional for OpenSSL native code
> -
>
> Key: HADOOP-17609
> URL: https://issues.apache.org/jira/browse/HADOOP-17609
> Project: Hadoop Common
> Issue Type: Improvement
> Components: native
> Affects Versions: 3.4.0
> Reporter: Masatake Iwasaki
> Assignee: Masatake Iwasaki
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
>
> openssl-devel-1.1.1g provided by CentOS 8 does not work after HDFS-15098
> because SM4 is not enabled in the openssl package. We should not force
> users to install OpenSSL from source even if they do not use the SM4 feature.
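The fallback visible in the DEBUG output above (OpensslSm4CtrCryptoCodec unavailable, so JceSm4CtrCryptoCodec is used) amounts to picking the first available codec from an ordered candidate list. A minimal sketch of that selection logic follows; the interface and helper names are hypothetical, not Hadoop's actual CryptoCodec factory:

```java
import java.util.List;
import java.util.Optional;

// Minimal sketch of "use the first available codec" selection, mirroring
// the fallback seen in the DEBUG logs above. The Codec interface and all
// names here are hypothetical, not Hadoop's real crypto codec classes.
public final class CodecFallback {

    interface Codec {
        String name();
        boolean isAvailable();   // e.g. false when OpenSSL lacks SM4 support
    }

    static Codec named(String name, boolean available) {
        return new Codec() {
            public String name() { return name; }
            public boolean isAvailable() { return available; }
        };
    }

    // Walk the configured candidates in order; skip unavailable ones.
    static Optional<Codec> pickFirstAvailable(List<Codec> candidates) {
        return candidates.stream().filter(Codec::isAvailable).findFirst();
    }

    // Demo mirroring the logged scenario: native SM4 missing, JCE present.
    static String demoPick() {
        return pickFirstAvailable(List.of(
                named("OpensslSm4CtrCryptoCodec", false),
                named("JceSm4CtrCryptoCodec", true)))
            .map(Codec::name).orElse("none");
    }

    public static void main(String[] args) {
        System.out.println(demoPick());
    }
}
```

Making SM4 optional in the native build then just means the OpenSSL-backed candidate reports itself unavailable instead of failing to load.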
[jira] [Work logged] (HADOOP-17609) Make SM4 support optional for OpenSSL native code
[ https://issues.apache.org/jira/browse/HADOOP-17609?focusedWorklogId=575331=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575331 ]

ASF GitHub Bot logged work on HADOOP-17609:
---
Author: ASF GitHub Bot
Created on: 01/Apr/21 05:33
Start Date: 01/Apr/21 05:33
Worklog Time Spent: 10m

Work Description: iwasakims commented on pull request #2847:
URL: https://github.com/apache/hadoop/pull/2847#issuecomment-811652915

OpensslAesCtrCryptoCodec is used for 'AES/CTR/NoPadding':

```
$ bin/hadoop key create key-aes -cipher 'AES/CTR/NoPadding'
$ bin/hdfs dfs -mkdir /zone-aes
$ bin/hdfs crypto -createZone -path /zone-aes -keyName key-aes
$ bin/hdfs dfs -put README.txt /zone-aes/
2021-04-01 05:23:37,755 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
2021-04-01 05:23:37,756 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
2021-04-01 05:23:38,457 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
2021-04-01 05:23:39,072 DEBUG crypto.OpensslAesCtrCryptoCodec: Using org.apache.hadoop.crypto.random.OpensslSecureRandom as random number generator.
2021-04-01 05:23:39,073 DEBUG util.PerformanceAdvisory: Using crypto codec org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.
...
$ bin/hdfs dfs -cat /zone-aes/README.txt
2021-04-01 05:23:52,844 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
2021-04-01 05:23:52,845 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
2021-04-01 05:23:53,549 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
2021-04-01 05:23:54,084 DEBUG kms.KMSClientProvider: KMSClientProvider created for KMS url: http://localhost:9600/kms/v1/ delegation token service: kms://http@localhost:9600/kms canonical service: 127.0.0.1:9600.
2021-04-01 05:23:54,087 DEBUG kms.LoadBalancingKMSClientProvider: Created LoadBalancingKMSClientProvider for KMS url: kms://http@localhost:9600/kms with 1 providers. delegation token service: kms://http@localhost:9600/kms, canonical service: 127.0.0.1:9600
2021-04-01 05:23:54,111 DEBUG crypto.OpensslAesCtrCryptoCodec: Using org.apache.hadoop.crypto.random.OpensslSecureRandom as random number generator.
2021-04-01 05:23:54,111 DEBUG util.PerformanceAdvisory: Using crypto codec org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.
...
For the latest information about Hadoop, please visit our website at:
   http://hadoop.apache.org/
and our wiki, at:
```

Issue Time Tracking
---
Worklog Id: (was: 575331)
Time Spent: 50m (was: 40m)

> Make SM4 support optional for OpenSSL native code
> -
>
> Key: HADOOP-17609
> URL: https://issues.apache.org/jira/browse/HADOOP-17609
> Project: Hadoop Common
> Issue Type: Improvement
> Components: native
> Affects Versions: 3.4.0
> Reporter: Masatake Iwasaki
> Assignee: Masatake Iwasaki
> Priority: Major
> Labels: pull-request-available
> Time Spent: 50m
> Remaining Estimate: 0h
>
> openssl-devel-1.1.1g provided by CentOS 8 does not work after HDFS-15098
> because SM4 is not enabled in the openssl package. We should not force
> users to install OpenSSL from source even if they do not use the SM4 feature.
[jira] [Commented] (HADOOP-17617) Incorrect representation of RESPONSE for Get Key Version in KMS index.md.vm file
[ https://issues.apache.org/jira/browse/HADOOP-17617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17312861#comment-17312861 ]

Ravuri Sushma sree commented on HADOOP-17617:
-

Attached HADOOP-17617.002 patch correcting the whitespace issue reported above. Kindly review.

> Incorrect representation of RESPONSE for Get Key Version in KMS index.md.vm file
>
> Key: HADOOP-17617
> URL: https://issues.apache.org/jira/browse/HADOOP-17617
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Ravuri Sushma sree
> Assignee: Ravuri Sushma sree
> Priority: Major
> Attachments: HADOOP-17617.001.patch, HADOOP-17617.002.patch
>
> Format of RESPONSE of Get Key Versions in KMS index.md.vm is incorrect
> https://hadoop.apache.org/docs/r3.1.1/hadoop-kms/index.html#Get_Key_Versions
[jira] [Updated] (HADOOP-17617) Incorrect representation of RESPONSE for Get Key Version in KMS index.md.vm file
[ https://issues.apache.org/jira/browse/HADOOP-17617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ravuri Sushma sree updated HADOOP-17617:
-
Attachment: HADOOP-17617.002.patch

> Incorrect representation of RESPONSE for Get Key Version in KMS index.md.vm file
>
> Key: HADOOP-17617
> URL: https://issues.apache.org/jira/browse/HADOOP-17617
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Ravuri Sushma sree
> Assignee: Ravuri Sushma sree
> Priority: Major
> Attachments: HADOOP-17617.001.patch, HADOOP-17617.002.patch
>
> Format of RESPONSE of Get Key Versions in KMS index.md.vm is incorrect
> https://hadoop.apache.org/docs/r3.1.1/hadoop-kms/index.html#Get_Key_Versions
[GitHub] [hadoop] hadoop-yetus commented on pull request #2775: MAPREDUCE-7329: HadoopPipes task has failed because of the ping timeout exception
hadoop-yetus commented on pull request #2775:
URL: https://github.com/apache/hadoop/pull/2775#issuecomment-811633948

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 35s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 12s | | trunk passed |
| +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 35s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 42s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 24s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 15s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 19s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 31s | | the patch passed |
| +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 0m 31s | | the patch passed |
| +1 :green_heart: | compile | 0m 28s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 0m 28s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 26s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2775/4/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core: The patch generated 2 new + 7 unchanged - 0 fixed = 9 total (was 7) |
| +1 :green_heart: | mvnsite | 0m 31s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 16s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 16s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 17s | | the patch passed |
| +1 :green_heart: | shadedclient | 14m 5s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| -1 :x: | unit | 7m 29s | [/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2775/4/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt) | hadoop-mapreduce-client-core in the patch passed. |
| +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. |
| | | 79m 48s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.mapred.TestJobEndNotifier |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2775/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2775 |
| JIRA Issue | MAPREDUCE-7329 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux b26c9dc0913a 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 9fce5d15c8457e30c02c31422458ee7d8debbe13 |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private
[jira] [Work logged] (HADOOP-17471) ABFS to collect IOStatistics
[ https://issues.apache.org/jira/browse/HADOOP-17471?focusedWorklogId=575309=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575309 ]

ASF GitHub Bot logged work on HADOOP-17471:
---
Author: ASF GitHub Bot
Created on: 01/Apr/21 04:33
Start Date: 01/Apr/21 04:33
Worklog Time Spent: 10m

Work Description: mehakmeet commented on pull request #2731:
URL: https://github.com/apache/hadoop/pull/2731#issuecomment-811630475

Extended the ```AbfsCounters.java``` interface with ```DurationTrackerFactory.java```, and converted the try-with-resources DurationTracking into

```
try {
  IOStatisticsBinding.trackDurationOfInvocation(abfsCounters,
      AbfsStatistic.getStatNameFromHttpCall(method),
      () -> completeExecute());
} catch (IOException e) {
  throw new RuntimeException("Error while tracking Duration of an "
      + "AbfsRestOperation call", e);
}
```

in the execute() method of the ```AbfsRestOperation.java``` class, but I am facing this exception now:

```
java.lang.RuntimeException: Error while tracking Duration of an AbfsRestOperation call
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:192)
	at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:767)
	at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:749)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getIsNamespaceEnabled(AzureBlobFileSystemStore.java:295)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getFileStatus(AzureBlobFileSystemStore.java:786)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:506)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.tryGetFileStatus(AzureBlobFileSystem.java:1035)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:135)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3460)
	at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:172)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3565)
	at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3518)
	at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:592)
	at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:604)
	at org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.createFileSystem(AbstractAbfsIntegrationTest.java:271)
	at org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.setup(AbstractAbfsIntegrationTest.java:146)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
Caused by: Operation failed: "The specified filesystem does not exist.", 404, HEAD, https://abfstorageacc.dfs.core.windows.net/abfs-testcontainer-72fd54ec-fe7b-42bc-a21c-61a175de9286/?upn=false=getAccessControl=90
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.completeExecute(AbfsRestOperation.java:225)
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.lambda$execute$0(AbfsRestOperation.java:190)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfInvocation(IOStatisticsBinding.java:454)
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:188)
	... 29 more
```

Requests seem to return 404 with a FileSystem-not-found exception. Tried debugging, but didn't get much information from that either. Have you faced this issue before with DurationTrackerFactory @steveloughran ?
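The pattern being debugged above (time an operation, record the duration, then propagate its failure) can be reduced to a small wrapper. The sketch below only imitates the shape of the IOStatisticsBinding.trackDurationOfInvocation call discussed in the comment: a plain nanoTime counter stands in for the DurationTracker sink, and the original IOException is rethrown rather than wrapped in a RuntimeException as the quoted snippet does. All names here are illustrative, not Hadoop's API.

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of duration tracking around an IO operation, imitating (not
// reproducing) the trackDurationOfInvocation pattern discussed above.
// The AtomicLong stands in for a real DurationTracker sink.
public final class DurationTracking {

    interface IOCallable<T> {
        T call() throws IOException;
    }

    static final AtomicLong lastDurationNanos = new AtomicLong();

    static <T> T trackDuration(IOCallable<T> op) throws IOException {
        long start = System.nanoTime();
        try {
            return op.call();   // the original IOException propagates unchanged
        } finally {
            // duration is recorded on success and on failure alike
            lastDurationNanos.set(System.nanoTime() - start);
        }
    }

    // Demo helper: run a trivial tracked operation.
    static int demoRun() {
        try {
            return trackDuration(() -> 42);
        } catch (IOException e) {
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demoRun());
    }
}
```

Note that the tracking wrapper is orthogonal to the 404 in the stack trace: the "filesystem does not exist" failure comes from the underlying HEAD request, and the wrapper merely determines how that failure surfaces to the caller.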
[GitHub] [hadoop] mehakmeet commented on pull request #2731: HADOOP-17471. ABFS to collect IOStatistics
mehakmeet commented on pull request #2731: URL: https://github.com/apache/hadoop/pull/2731#issuecomment-811630475

Extended the `AbfsCounters` interface with `DurationTrackerFactory`, and converted the try-with-resources DurationTracking into

```
try {
    IOStatisticsBinding.trackDurationOfInvocation(abfsCounters,
        AbfsStatistic.getStatNameFromHttpCall(method),
        () -> completeExecute());
} catch (IOException e) {
    throw new RuntimeException("Error while tracking Duration of an "
        + "AbfsRestOperation call", e);
}
```

in the execute() method of the `AbfsRestOperation` class, but I am facing this exception now:

```
java.lang.RuntimeException: Error while tracking Duration of an AbfsRestOperation call
    at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:192)
    at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:767)
    at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:749)
    at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getIsNamespaceEnabled(AzureBlobFileSystemStore.java:295)
    at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getFileStatus(AzureBlobFileSystemStore.java:786)
    at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:506)
    at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.tryGetFileStatus(AzureBlobFileSystem.java:1035)
    at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:135)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3460)
    at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:172)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3565)
    at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3518)
    at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:592)
    at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:604)
    at org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.createFileSystem(AbstractAbfsIntegrationTest.java:271)
    at org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.setup(AbstractAbfsIntegrationTest.java:146)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
Caused by: Operation failed: "The specified filesystem does not exist.", 404, HEAD, https://abfstorageacc.dfs.core.windows.net/abfs-testcontainer-72fd54ec-fe7b-42bc-a21c-61a175de9286/?upn=false=getAccessControl=90
    at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.completeExecute(AbfsRestOperation.java:225)
    at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.lambda$execute$0(AbfsRestOperation.java:190)
    at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfInvocation(IOStatisticsBinding.java:454)
    at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:188)
    ...
29 more
```

Requests seem to return 404 with a FileSystem-not-found exception. I tried debugging, but didn't get much information from that either. Have you faced this issue before with DurationTrackerFactory @steveloughran ?

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

- To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
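The wrapping pattern in the snippet above can be sketched independently of Hadoop: a tracker runs an IO-raising callable, records the elapsed time under a statistic name, and lets the IOException propagate. This is a minimal illustration with hypothetical class and method names, not the actual `IOStatisticsBinding` implementation; note that in the stack trace the real failure is the underlying 404 from `completeExecute()`, which the catch block merely re-wraps in a RuntimeException.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

/** Minimal sketch of duration tracking around an IO operation (hypothetical, not Hadoop's API). */
class DurationTrackingSketch {

    /** Operation that may raise IOException, mirroring a lambda like () -> completeExecute(). */
    interface IOOperation<T> {
        T apply() throws IOException;
    }

    /** Run the operation and record its elapsed nanoseconds under the given statistic name. */
    static <T> T trackDuration(Map<String, Long> stats, String name, IOOperation<T> op)
            throws IOException {
        long start = System.nanoTime();
        try {
            return op.apply();                               // IOException propagates unchanged...
        } finally {
            stats.put(name, System.nanoTime() - start);      // ...but the duration is still recorded
        }
    }

    public static void main(String[] args) throws IOException {
        Map<String, Long> stats = new HashMap<>();
        int status = trackDuration(stats, "http_head_request", () -> 200);
        System.out.println(status + " " + stats.containsKey("http_head_request")); // prints: 200 true
    }
}
```

The duration is recorded in a finally block so that even a failing request contributes a timing sample before the exception surfaces.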
[jira] [Work logged] (HADOOP-17609) Make SM4 support optional for OpenSSL native code
[ https://issues.apache.org/jira/browse/HADOOP-17609?focusedWorklogId=575287=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575287 ]

ASF GitHub Bot logged work on HADOOP-17609:
---
Author: ASF GitHub Bot
Created on: 01/Apr/21 03:22
Start Date: 01/Apr/21 03:22
Worklog Time Spent: 10m
Work Description: iwasakims edited a comment on pull request #2847: URL: https://github.com/apache/hadoop/pull/2847#issuecomment-811597921

I manually tested the fix on CentOS 8 with bcprov-ext-jdk15on-168.jar set up based on [the comment of HDFS-15098](https://issues.apache.org/jira/browse/HDFS-15098?focusedCommentId=17112893=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17112893). OpensslCipher is available but SM4 is not supported. `hadoop key create key1 -cipher 'SM4/CTR/NoPadding'` worked (by falling back from OpensslSm4CtrCryptoCodec to JceSm4CtrCryptoCodec).

```
$ grep Bouncy /usr/lib/jvm/java-1.8.0-openjdk/jre/lib/security/java.security
security.provider.10=org.bouncycastle.jce.provider.BouncyCastleProvider
$ bin/hadoop checknative 2>/dev/null
Native library checking:
hadoop:  true /home/centos/dist/hadoop-3.4.0-SNAPSHOT-HADOOP-17609/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
zstd:    true /lib64/libzstd.so.1
bzip2:   true /lib64/libbz2.so.1
openssl: true /lib64/libcrypto.so
ISA-L:   true /lib64/libisal.so.2
PMDK:    false The native code was built without PMDK support.
$ bin/hadoop --daemon start kms
$ bin/hadoop key create key1 -cipher 'SM4/CTR/NoPadding'
2021-04-01 02:38:10,276 DEBUG kms.KMSClientProvider: KMSClientProvider created for KMS url: http://localhost:9600/kms/v1/ delegation token service: kms://http@localhost:9600/kms canonical service: 127.0.0.1:9600.
2021-04-01 02:38:10,288 DEBUG kms.LoadBalancingKMSClientProvider: Created LoadBalancingKMSClientProvider for KMS url: kms://http@localhost:9600/kms with 1 providers. delegation token service: kms://http@localhost:9600/kms, canonical service: 127.0.0.1:9600
2021-04-01 02:38:10,447 DEBUG kms.KMSClientProvider: Current UGI: centos (auth:SIMPLE)
2021-04-01 02:38:10,450 DEBUG kms.KMSClientProvider: Login UGI: centos (auth:SIMPLE)
key1 has been successfully created with options Options{cipher='SM4/CTR/NoPadding', bitLength=128, description='null', attributes=null}.
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@41e1e210 has been updated.
```

Issue Time Tracking
---
Worklog Id: (was: 575287)
Time Spent: 40m (was: 0.5h)

> Make SM4 support optional for OpenSSL native code
> -
>
> Key: HADOOP-17609
> URL: https://issues.apache.org/jira/browse/HADOOP-17609
> Project: Hadoop Common
> Issue Type: Improvement
> Components: native
> Affects Versions: 3.4.0
> Reporter: Masatake Iwasaki
> Assignee: Masatake Iwasaki
> Priority: Major
> Labels: pull-request-available
> Time Spent: 40m
> Remaining Estimate: 0h
>
> openssl-devel-1.1.1g provided by CentOS 8 does not work after HDFS-15098
> because SM4 is not enabled in the openssl package. We should not force
> users to install OpenSSL from source code even if they do not use the SM4 feature.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HADOOP-17609) Make SM4 support optional for OpenSSL native code
[ https://issues.apache.org/jira/browse/HADOOP-17609?focusedWorklogId=575283=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575283 ]

ASF GitHub Bot logged work on HADOOP-17609:
---
Author: ASF GitHub Bot
Created on: 01/Apr/21 02:47
Start Date: 01/Apr/21 02:47
Worklog Time Spent: 10m
Work Description: iwasakims commented on pull request #2847: URL: https://github.com/apache/hadoop/pull/2847#issuecomment-811597921

I manually tested the fix on CentOS 8 with bcprov-ext-jdk15on-165.jar set up based on [the comment of HDFS-15098](https://issues.apache.org/jira/browse/HDFS-15098?focusedCommentId=17112893=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17112893). OpensslCipher is available but SM4 is not supported. `hadoop key create key1 -cipher 'SM4/CTR/NoPadding'` worked (by falling back from OpensslSm4CtrCryptoCodec to JceSm4CtrCryptoCodec).

```
$ grep Bouncy /usr/lib/jvm/java-1.8.0-openjdk/jre/lib/security/java.security
security.provider.10=org.bouncycastle.jce.provider.BouncyCastleProvider
$ bin/hadoop checknative 2>/dev/null
Native library checking:
hadoop:  true /home/centos/dist/hadoop-3.4.0-SNAPSHOT-HADOOP-17609/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
zstd:    true /lib64/libzstd.so.1
bzip2:   true /lib64/libbz2.so.1
openssl: true /lib64/libcrypto.so
ISA-L:   true /lib64/libisal.so.2
PMDK:    false The native code was built without PMDK support.
$ bin/hadoop --daemon start kms
$ bin/hadoop key create key1 -cipher 'SM4/CTR/NoPadding'
2021-04-01 02:38:10,276 DEBUG kms.KMSClientProvider: KMSClientProvider created for KMS url: http://localhost:9600/kms/v1/ delegation token service: kms://http@localhost:9600/kms canonical service: 127.0.0.1:9600.
2021-04-01 02:38:10,288 DEBUG kms.LoadBalancingKMSClientProvider: Created LoadBalancingKMSClientProvider for KMS url: kms://http@localhost:9600/kms with 1 providers. delegation token service: kms://http@localhost:9600/kms, canonical service: 127.0.0.1:9600
2021-04-01 02:38:10,447 DEBUG kms.KMSClientProvider: Current UGI: centos (auth:SIMPLE)
2021-04-01 02:38:10,450 DEBUG kms.KMSClientProvider: Login UGI: centos (auth:SIMPLE)
key1 has been successfully created with options Options{cipher='SM4/CTR/NoPadding', bitLength=128, description='null', attributes=null}.
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@41e1e210 has been updated.
```

Issue Time Tracking
---
Worklog Id: (was: 575283)
Time Spent: 0.5h (was: 20m)

> Make SM4 support optional for OpenSSL native code
> -
>
> Key: HADOOP-17609
> URL: https://issues.apache.org/jira/browse/HADOOP-17609
> Project: Hadoop Common
> Issue Type: Improvement
> Components: native
> Affects Versions: 3.4.0
> Reporter: Masatake Iwasaki
> Assignee: Masatake Iwasaki
> Priority: Major
> Labels: pull-request-available
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> openssl-devel-1.1.1g provided by CentOS 8 does not work after HDFS-15098
> because SM4 is not enabled in the openssl package. We should not force
> users to install OpenSSL from source code even if they do not use the SM4 feature.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
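The fallback behavior verified above (OpensslSm4CtrCryptoCodec to JceSm4CtrCryptoCodec when the native OpenSSL build lacks SM4) follows a general pattern: probe each candidate codec in preference order and use the first one that reports itself available. The sketch below uses hypothetical names; Hadoop's actual CryptoCodec selection logic differs in detail.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

/** Sketch of preference-ordered codec selection with fallback (hypothetical names). */
class CodecFallbackSketch {

    interface Codec {
        String name();
        boolean isAvailable();   // e.g. false when libcrypto was built without SM4
    }

    /** Build a codec stub with a fixed availability, standing in for a real probe. */
    static Codec codec(String name, boolean available) {
        return new Codec() {
            public String name() { return name; }
            public boolean isAvailable() { return available; }
        };
    }

    /** Return the first available codec, mirroring the native-to-JCE fallback. */
    static Optional<Codec> select(List<Codec> preferenceOrder) {
        return preferenceOrder.stream().filter(Codec::isAvailable).findFirst();
    }

    public static void main(String[] args) {
        // Native SM4 unavailable (as on a stock CentOS 8 openssl), JCE provider present:
        List<Codec> order = Arrays.asList(
                codec("OpensslSm4CtrCryptoCodec", false),
                codec("JceSm4CtrCryptoCodec", true));
        System.out.println(select(order).map(Codec::name).orElse("none")); // prints: JceSm4CtrCryptoCodec
    }
}
```

The point of making SM4 optional is exactly this: when the native probe fails, selection falls through to the pure-Java codec instead of aborting.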
[jira] [Work logged] (HADOOP-17619) Fix DelegationTokenRenewer#updateRenewalTime java doc error.
[ https://issues.apache.org/jira/browse/HADOOP-17619?focusedWorklogId=575281=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575281 ]

ASF GitHub Bot logged work on HADOOP-17619:
---
Author: ASF GitHub Bot
Created on: 01/Apr/21 02:38
Start Date: 01/Apr/21 02:38
Worklog Time Spent: 10m
Work Description: qizhu-lucas commented on pull request #2846: URL: https://github.com/apache/hadoop/pull/2846#issuecomment-811594958

@aajisaka @ayushtkn Could you help review this? It's an incorrect javadoc for updateRenewalTime. Thanks.

Issue Time Tracking
---
Worklog Id: (was: 575281)
Time Spent: 0.5h (was: 20m)

> Fix DelegationTokenRenewer#updateRenewalTime java doc error.
> -
>
> Key: HADOOP-17619
> URL: https://issues.apache.org/jira/browse/HADOOP-17619
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Qi Zhu
> Priority: Minor
> Labels: pull-request-available
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> The param of updateRenewalTime should be the renew cycle, not the new time.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hadoop] hadoop-yetus commented on pull request #2784: HDFS-15850. Superuser actions should be reported to external enforcers
hadoop-yetus commented on pull request #2784: URL: https://github.com/apache/hadoop/pull/2784#issuecomment-811566687

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 55s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 13m 57s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 22m 48s | | trunk passed |
| +1 :green_heart: | compile | 5m 9s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 4m 46s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 15s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 0s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 29s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 2m 15s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 4m 26s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 34s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 21s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 45s | | the patch passed |
| +1 :green_heart: | compile | 5m 12s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 5m 12s | | the patch passed |
| +1 :green_heart: | compile | 4m 39s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 4m 39s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 11s | | hadoop-hdfs-project: The patch generated 0 new + 498 unchanged - 6 fixed = 498 total (was 504) |
| +1 :green_heart: | mvnsite | 1m 46s | | the patch passed |
| +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 1m 18s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 2m 3s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 4m 38s | | the patch passed |
| +1 :green_heart: | shadedclient | 16m 40s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 348m 20s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2784/12/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| -1 :x: | unit | 26m 8s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2784/12/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 0m 37s | | The patch does not generate ASF License warnings. |
| | | 491m 59s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.datanode.TestBlockScanner |
| | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
| | hadoop.hdfs.TestPersistBlocks |
| | hadoop.hdfs.server.datanode.TestBlockRecovery |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.hdfs.TestDFSShell |
| | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
| | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| | hadoop.hdfs.server.federation.router.TestRouterFederationRename |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base:
[GitHub] [hadoop] hadoop-yetus commented on pull request #2844: HDFS-15940 : Fixing and refactoring tests specific to Block recovery
hadoop-yetus commented on pull request #2844: URL: https://github.com/apache/hadoop/pull/2844#issuecomment-811555076

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 58s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 32s | | trunk passed |
| +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 1m 17s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 1s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 23s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 26s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 4s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 15s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 10s | | the patch passed |
| +1 :green_heart: | compile | 1m 12s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 1m 12s | | the patch passed |
| +1 :green_heart: | compile | 1m 10s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 1m 10s | | the patch passed |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2844/2/artifact/out/blanks-eol.txt) | The patch has 8 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -0 :warning: | checkstyle | 0m 51s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2844/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 10 new + 55 unchanged - 7 fixed = 65 total (was 62) |
| +1 :green_heart: | mvnsite | 1m 14s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 44s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 5s | | the patch passed |
| +1 :green_heart: | shadedclient | 15m 49s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 394m 56s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2844/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 44s | | The patch does not generate ASF License warnings. |
| | | 480m 27s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| | hadoop.hdfs.TestViewDistributedFileSystemContract |
| | hadoop.hdfs.TestPersistBlocks |
| | hadoop.hdfs.TestDFSShell |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
| | hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme |
| | hadoop.hdfs.TestStateAlignmentContextWithHA |
| | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.hdfs.server.datanode.TestBlockScanner |
| | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.TestHDFSFileSystemContract |
| | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
| | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base:
[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17312744#comment-17312744 ]

Wei-Chiu Chuang commented on HADOOP-15327:
--

Here's my WIP patch for this migration: https://github.com/jojochuang/hadoop/commits/shuffle_handler_netty4

TestShuffleHandler has a few failures. The testMaxConnections UT failure is apparently caused by either a bad test or a behavior change in netty: the test expects connections to be received sequentially, but connections are actually received in parallel, so no order can be assumed.

The other UT failures in TestShuffleHandler are caused by socket timeouts: the clients receive the header of the response from the server, but not the content. I suspected the Channel.write() API, which in netty4 no longer automatically flushes after a write. I switched to writeAndFlush(), but the problems still persist.

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Xiaoyu Yao
> Assignee: Wei-Chiu Chuang
> Priority: Major
>
> This way, we can remove the dependencies on netty3 (jboss.netty)

-- This message was sent by Atlassian Jira (v8.3.4#803005)
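The write-without-flush hypothesis can be illustrated abstractly: in Netty 4, Channel.write() only queues data in the outbound buffer, and nothing reaches the peer until a flush() (or writeAndFlush()) happens. Below is a toy model of that buffering behavior with hypothetical class names; it is not Netty's API, just a sketch of the semantics being debugged.

```java
import java.util.ArrayList;
import java.util.List;

/** Toy model of a Netty4-style outbound buffer: write() queues, flush() transmits. */
class FlushModelSketch {

    private final List<String> outboundBuffer = new ArrayList<>();
    private final List<String> wire = new ArrayList<>();   // what the peer actually receives

    /** Queue a message without transmitting it (Netty4-style write semantics). */
    public void write(String msg) { outboundBuffer.add(msg); }

    /** Transmit everything queued so far. */
    public void flush() {
        wire.addAll(outboundBuffer);
        outboundBuffer.clear();
    }

    /** Convenience combining the two, like writeAndFlush(). */
    public void writeAndFlush(String msg) { write(msg); flush(); }

    public List<String> received() { return wire; }

    public static void main(String[] args) {
        FlushModelSketch ch = new FlushModelSketch();
        ch.write("HTTP header");
        ch.flush();                        // header flushed: the client sees it
        ch.write("HTTP content");          // content queued but never flushed
        System.out.println(ch.received()); // only the flushed header was "received"
    }
}
```

If the header happens to get flushed by one code path but the content write is never followed by a flush, the client sees exactly the symptom described above (header received, body missing, then a socket timeout), which is why converting write() calls to writeAndFlush() is the usual first step in a netty3-to-netty4 migration.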
[jira] [Work logged] (HADOOP-17511) Add an Audit plugin point for S3A auditing/context
[ https://issues.apache.org/jira/browse/HADOOP-17511?focusedWorklogId=575163=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575163 ]

ASF GitHub Bot logged work on HADOOP-17511:
---
Author: ASF GitHub Bot
Created on: 31/Mar/21 21:43
Start Date: 31/Mar/21 21:43
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #2807: URL: https://github.com/apache/hadoop/pull/2807#issuecomment-811485602

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 56s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 3s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 42 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 4s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 20m 25s | | trunk passed |
| +1 :green_heart: | compile | 20m 35s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 17m 54s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 3m 49s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 24s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 40s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 2m 21s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 37s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 43s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 26s | | the patch passed |
| +1 :green_heart: | compile | 20m 1s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| -1 :x: | javac | 20m 1s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 2 new + 2034 unchanged - 1 fixed = 2036 total (was 2035) |
| +1 :green_heart: | compile | 18m 0s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| -1 :x: | javac | 18m 0s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1929 unchanged - 1 fixed = 1930 total (was 1930) |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 3m 46s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/results-checkstyle-root.txt) | root: The patch generated 41 new + 185 unchanged - 4 fixed = 226 total (was 189) |
| +1 :green_heart: | mvnsite | 2m 28s | | the patch passed |
| +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 1m 40s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| -1 :x: | javadoc | 0m 44s | [/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 80 unchanged - 8 fixed = 86 total (was 88) |
| -1 :x: | spotbugs | 1m 34s |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector
hadoop-yetus commented on pull request #2807: URL: https://github.com/apache/hadoop/pull/2807#issuecomment-811485602 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 56s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 3s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 42 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 4s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 25s | | trunk passed | | +1 :green_heart: | compile | 20m 35s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 17m 54s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 3m 49s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 24s | | trunk passed | | +1 :green_heart: | javadoc | 1m 40s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 21s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 37s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 43s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 26s | | the patch passed | | +1 :green_heart: | compile | 20m 1s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | -1 :x: | javac | 20m 1s | [/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 2 new + 2034 unchanged - 1 fixed = 2036 total (was 2035) | | +1 :green_heart: | compile | 18m 0s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | -1 :x: | javac | 18m 0s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1929 unchanged - 1 fixed = 1930 total (was 1930) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 46s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/results-checkstyle-root.txt) | root: The patch generated 41 new + 185 unchanged - 4 fixed = 226 total (was 189) | | +1 :green_heart: | mvnsite | 2m 28s | | the patch passed | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. 
| | +1 :green_heart: | javadoc | 1m 40s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | -1 :x: | javadoc | 0m 44s | [/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 80 unchanged - 8 fixed = 86 total (was 88) | | -1 :x: | spotbugs | 1m 34s | [/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/9/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html) | hadoop-tools/hadoop-aws generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) | | +1 :green_heart: | shadedclient | 14m 53s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 26s | | hadoop-common in the patch passed. | | -1 :x: | unit | 2m 32s |
[jira] [Commented] (HADOOP-17222) Create socket address leveraging URI cache
[ https://issues.apache.org/jira/browse/HADOOP-17222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17312699#comment-17312699 ] Mingliang Liu commented on HADOOP-17222: [~sodonnell] I was oncall recently so did not have time to review that in time. I just checked and the backport looks great. Thanks! > Create socket address leveraging URI cache > --- > > Key: HADOOP-17222 > URL: https://issues.apache.org/jira/browse/HADOOP-17222 > Project: Hadoop Common > Issue Type: Improvement > Components: common, hdfs-client > Environment: HBase version: 2.1.0 > JVM: -Xmx2g -Xms2g > hadoop hdfs version: 2.7.4 > disk:SSD > OS:CentOS Linux release 7.4.1708 (Core) > JMH Benchmark: @Fork(value = 1) > @Warmup(iterations = 300) > @Measurement(iterations = 300) >Reporter: fanrui >Assignee: fanrui >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0 > > Attachments: After Optimization remark.png, After optimization.svg, > Before Optimization remark.png, Before optimization.svg > > Time Spent: 5h 50m > Remaining Estimate: 0h > > Note:Not only the hdfs client can get the current benefit, all callers of > NetUtils.createSocketAddr will get the benefit. Just use hdfs client as an > example. > > Hdfs client selects best DN for hdfs Block. method call stack: > DFSInputStream.chooseDataNode -> getBestNodeDNAddrPair -> > NetUtils.createSocketAddr > NetUtils.createSocketAddr creates the corresponding InetSocketAddress based > on the host and port. There are some heavier operations in the > NetUtils.createSocketAddr method, for example: URI.create(target), so > NetUtils.createSocketAddr takes more time to execute. > The following is my performance report. The report is based on HBase calling > hdfs. HBase is a high-frequency access client for hdfs, because HBase read > operations often access a small DataBlock (about 64k) instead of the entire > HFile. In the case of high frequency access, the NetUtils.createSocketAddr > method is time-consuming. > h3. 
Test Environment: > > {code:java} > HBase version: 2.1.0 > JVM: -Xmx2g -Xms2g > hadoop hdfs version: 2.7.4 > disk:SSD > OS:CentOS Linux release 7.4.1708 (Core) > JMH Benchmark: @Fork(value = 1) > @Warmup(iterations = 300) > @Measurement(iterations = 300) > {code} > h4. Before Optimization FlameGraph: > In the figure, we can see that DFSInputStream.getBestNodeDNAddrPair accounts > for 4.86% of the entire CPU, and the creation of URIs accounts for a larger > proportion. > !Before Optimization remark.png! > h3. Optimization ideas: > NetUtils.createSocketAddr creates InetSocketAddress based on host and port. > Here we can add Cache to InetSocketAddress. The key of Cache is host and > port, and the value is InetSocketAddress. > h4. After Optimization FlameGraph: > In the figure, we can see that DFSInputStream.getBestNodeDNAddrPair accounts > for 0.54% of the entire CPU. Here, ConcurrentHashMap is used as the Cache, > and the ConcurrentHashMap.get() method gets data from the Cache. The CPU > usage of DFSInputStream.getBestNodeDNAddrPair has been optimized from 4.86% > to 0.54%. > !After Optimization remark.png! > h3. Original FlameGraph link: > [Before > Optimization|https://drive.google.com/file/d/133L5m75u2tu_KgKfGHZLEUzGR0XAfUl6/view?usp=sharing] > [After Optimization > FlameGraph|https://drive.google.com/file/d/133L5m75u2tu_KgKfGHZLEUzGR0XAfUl6/view?usp=sharing] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
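The caching idea described in the issue above can be sketched in a few lines. This is an illustrative stand-in, not the actual NetUtils.createSocketAddr change; the class and method names here are hypothetical, and it uses createUnresolved addresses so the sketch runs without DNS, whereas the real code path also performs resolution:

```java
import java.net.InetSocketAddress;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the HADOOP-17222 approach: memoize InetSocketAddress instances
// keyed by "host:port" so repeated lookups skip the expensive URI-based
// parsing path. ConcurrentHashMap.computeIfAbsent gives atomic, lock-free
// reads for cache hits, matching the flame-graph improvement described above.
public class SocketAddressCache {
    private static final ConcurrentHashMap<String, InetSocketAddress> CACHE =
            new ConcurrentHashMap<>();

    public static InetSocketAddress get(String host, int port) {
        // computeIfAbsent constructs the address at most once per key,
        // even under concurrent callers
        return CACHE.computeIfAbsent(host + ":" + port,
                k -> InetSocketAddress.createUnresolved(host, port));
    }
}
```

On a hit the cost drops to a single hash lookup, which is why the DFSInputStream.getBestNodeDNAddrPair CPU share fell from 4.86% to 0.54% in the report above.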
[GitHub] [hadoop] saintstack commented on pull request #2844: HDFS-15940 : Fixing and refactoring tests specific to Block recovery
saintstack commented on pull request #2844: URL: https://github.com/apache/hadoop/pull/2844#issuecomment-811449530 There is a new build running. Let's see what set it produces. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-17608) TestKMS is flaky
[ https://issues.apache.org/jira/browse/HADOOP-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17312667#comment-17312667 ] Akira Ajisaka edited comment on HADOOP-17608 at 3/31/21, 7:30 PM: -- The PR does not fix the test, reverted. After HADOOP-16524, the tests always fail because of the reload thread name has been changed. was (Author: ajisakaa): After HADOOP-16524, the test always fails because of the reload thread name has been changed. > TestKMS is flaky > > > Key: HADOOP-17608 > URL: https://issues.apache.org/jira/browse/HADOOP-17608 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: flaky-test, pull-request-available > Time Spent: 2h > Remaining Estimate: 0h > > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/460/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt] > The following https tests are flaky: > * testStartStopHttpsPseudo > * testStartStopHttpsKerberos > * testDelegationTokensOpsHttpsPseudo > {noformat} > [ERROR] > testStartStopHttpsPseudo(org.apache.hadoop.crypto.key.kms.server.TestKMS) > Time elapsed: 1.354 s <<< ERROR! 
> java.lang.NullPointerException > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:553) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:258) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:235) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:230) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStop(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsPseudo(TestKMS.java:634){noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
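The root cause noted above is a test that looked up a background thread by its exact name, which broke when HADOOP-16524 renamed the reloader thread. A hypothetical helper (not the actual TestKMS code) shows the more robust pattern of matching on a name prefix:

```java
// Hypothetical helper, not from TestKMS: locate a live thread whose name
// starts with a given prefix. Prefix matching survives a library renaming
// its background threads, which is exactly what broke this test.
public class ThreadFinder {
    public static Thread findByPrefix(String prefix) {
        // getAllStackTraces returns a snapshot of all live threads
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            if (t.getName().startsWith(prefix)) {
                return t;
            }
        }
        return null; // caller decides whether absence is a test failure
    }
}
```

A test built on this returns null instead of throwing the NullPointerException seen in the stack trace above when no matching thread exists.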
[jira] [Comment Edited] (HADOOP-17608) TestKMS is flaky
[ https://issues.apache.org/jira/browse/HADOOP-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17312667#comment-17312667 ] Akira Ajisaka edited comment on HADOOP-17608 at 3/31/21, 7:30 PM: -- The PR does not fix the test, reverted. After HADOOP-16524, the tests always fail because the reload thread name has been changed. was (Author: ajisakaa): The PR does not fix the test, reverted. After HADOOP-16524, the tests always fail because of the reload thread name has been changed. > TestKMS is flaky > > > Key: HADOOP-17608 > URL: https://issues.apache.org/jira/browse/HADOOP-17608 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: flaky-test, pull-request-available > Time Spent: 2h > Remaining Estimate: 0h > > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/460/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt] > The following https tests are flaky: > * testStartStopHttpsPseudo > * testStartStopHttpsKerberos > * testDelegationTokensOpsHttpsPseudo > {noformat} > [ERROR] > testStartStopHttpsPseudo(org.apache.hadoop.crypto.key.kms.server.TestKMS) > Time elapsed: 1.354 s <<< ERROR! 
> java.lang.NullPointerException > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:553) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:258) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:235) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:230) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStop(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsPseudo(TestKMS.java:634){noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17608) TestKMS is flaky
[ https://issues.apache.org/jira/browse/HADOOP-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17312667#comment-17312667 ] Akira Ajisaka commented on HADOOP-17608: After HADOOP-16524, the test always fails because of the reload thread name has been changed. > TestKMS is flaky > > > Key: HADOOP-17608 > URL: https://issues.apache.org/jira/browse/HADOOP-17608 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: flaky-test, pull-request-available > Time Spent: 2h > Remaining Estimate: 0h > > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/460/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt] > The following https tests are flaky: > * testStartStopHttpsPseudo > * testStartStopHttpsKerberos > * testDelegationTokensOpsHttpsPseudo > {noformat} > [ERROR] > testStartStopHttpsPseudo(org.apache.hadoop.crypto.key.kms.server.TestKMS) > Time elapsed: 1.354 s <<< ERROR! > java.lang.NullPointerException > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:553) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:258) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:235) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:230) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStop(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsPseudo(TestKMS.java:634){noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17608) TestKMS is flaky
[ https://issues.apache.org/jira/browse/HADOOP-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17608: --- Fix Version/s: (was: 3.4.0) Target Version/s: 3.3.1, 3.4.0 > TestKMS is flaky > > > Key: HADOOP-17608 > URL: https://issues.apache.org/jira/browse/HADOOP-17608 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: flaky-test, pull-request-available > Time Spent: 2h > Remaining Estimate: 0h > > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/460/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt] > The following https tests are flaky: > * testStartStopHttpsPseudo > * testStartStopHttpsKerberos > * testDelegationTokensOpsHttpsPseudo > {noformat} > [ERROR] > testStartStopHttpsPseudo(org.apache.hadoop.crypto.key.kms.server.TestKMS) > Time elapsed: 1.354 s <<< ERROR! > java.lang.NullPointerException > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:553) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:258) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:235) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:230) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStop(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsPseudo(TestKMS.java:634){noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-17608) TestKMS is flaky
[ https://issues.apache.org/jira/browse/HADOOP-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka reopened HADOOP-17608: > TestKMS is flaky > > > Key: HADOOP-17608 > URL: https://issues.apache.org/jira/browse/HADOOP-17608 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: flaky-test, pull-request-available > Fix For: 3.4.0 > > Time Spent: 2h > Remaining Estimate: 0h > > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/460/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt] > The following https tests are flaky: > * testStartStopHttpsPseudo > * testStartStopHttpsKerberos > * testDelegationTokensOpsHttpsPseudo > {noformat} > [ERROR] > testStartStopHttpsPseudo(org.apache.hadoop.crypto.key.kms.server.TestKMS) > Time elapsed: 1.354 s <<< ERROR! > java.lang.NullPointerException > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:553) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:258) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:235) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:230) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStop(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsPseudo(TestKMS.java:634){noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17608) TestKMS is flaky
[ https://issues.apache.org/jira/browse/HADOOP-17608?focusedWorklogId=575077=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575077 ] ASF GitHub Bot logged work on HADOOP-17608: --- Author: ASF GitHub Bot Created on: 31/Mar/21 19:28 Start Date: 31/Mar/21 19:28 Worklog Time Spent: 10m Work Description: aajisaka commented on pull request #2828: URL: https://github.com/apache/hadoop/pull/2828#issuecomment-811375988 Reverted. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 575077) Time Spent: 2h (was: 1h 50m) > TestKMS is flaky > > > Key: HADOOP-17608 > URL: https://issues.apache.org/jira/browse/HADOOP-17608 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: flaky-test, pull-request-available > Fix For: 3.4.0 > > Time Spent: 2h > Remaining Estimate: 0h > > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/460/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt] > The following https tests are flaky: > * testStartStopHttpsPseudo > * testStartStopHttpsKerberos > * testDelegationTokensOpsHttpsPseudo > {noformat} > [ERROR] > testStartStopHttpsPseudo(org.apache.hadoop.crypto.key.kms.server.TestKMS) > Time elapsed: 1.354 s <<< ERROR! 
> java.lang.NullPointerException > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:553) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:258) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:235) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:230) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStop(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsPseudo(TestKMS.java:634){noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17608) TestKMS is flaky
[ https://issues.apache.org/jira/browse/HADOOP-17608?focusedWorklogId=575075=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575075 ] ASF GitHub Bot logged work on HADOOP-17608: --- Author: ASF GitHub Bot Created on: 31/Mar/21 19:25 Start Date: 31/Mar/21 19:25 Worklog Time Spent: 10m Work Description: aajisaka commented on pull request #2828: URL: https://github.com/apache/hadoop/pull/2828#issuecomment-811372711 Let me revert this. HADOOP-16524 changed the reloader thread name and this PR does not fix the test. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 575075) Time Spent: 1h 50m (was: 1h 40m) > TestKMS is flaky > > > Key: HADOOP-17608 > URL: https://issues.apache.org/jira/browse/HADOOP-17608 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: flaky-test, pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 50m > Remaining Estimate: 0h > > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/460/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt] > The following https tests are flaky: > * testStartStopHttpsPseudo > * testStartStopHttpsKerberos > * testDelegationTokensOpsHttpsPseudo > {noformat} > [ERROR] > testStartStopHttpsPseudo(org.apache.hadoop.crypto.key.kms.server.TestKMS) > Time elapsed: 1.354 s <<< ERROR! 
> java.lang.NullPointerException > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:553) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:258) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:235) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:230) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStop(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsPseudo(TestKMS.java:634){noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on pull request #2844: HDFS-15940 : Fixing and refactoring tests specific to Block recovery
virajjasani commented on pull request #2844: URL: https://github.com/apache/hadoop/pull/2844#issuecomment-811353656 > LGTM... mostly just checkstyle fixes. +1 on breaking up the test and naming it properly. Why all the failures? Thanks for the review @saintstack. The failures are flakies or some recent permanent ones; I have seen a similar number of failures on many recent PRs. I am trying to spend some time, when I can, to fix them. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17511) Add an Audit plugin point for S3A auditing/context
[ https://issues.apache.org/jira/browse/HADOOP-17511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17511: Target Version/s: 3.4.0 (was: 3.3.1) > Add an Audit plugin point for S3A auditing/context > -- > > Key: HADOOP-17511 > URL: https://issues.apache.org/jira/browse/HADOOP-17511 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.3.1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 13h 10m > Remaining Estimate: 0h > > Add a way for auditing tools to correlate S3 object calls with Hadoop FS API > calls. > Initially just to log/forward to an auditing service. > Later: let us attach them as parameters in S3 requests, such as opentrace > headeers or (my initial idea: http referrer header -where it will get into > the log) > Challenges > * ensuring the audit span is created for every public entry point. That will > have to include those used in s3guard tools, some defacto public APIs > * and not re-entered for active spans. s3A code must not call back into the > FS API points > * Propagation across worker threads -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
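One of the challenges listed in the issue above is propagating the audit span across worker threads. A minimal sketch of that pattern, not the actual S3A implementation (class and field names here are hypothetical), captures a thread-local span at submit time and reinstates it inside the task:

```java
// Illustrative sketch only: carry a thread-local "audit span" across an
// executor boundary by snapshotting it in the submitting thread and
// activating it in the worker for the duration of the task.
public class SpanPropagation {
    static final ThreadLocal<String> SPAN = new ThreadLocal<>();

    static Runnable wrap(Runnable task) {
        String captured = SPAN.get();      // snapshot in the caller's thread
        return () -> {
            String old = SPAN.get();
            SPAN.set(captured);            // activate caller's span in worker
            try {
                task.run();
            } finally {
                SPAN.set(old);             // restore worker's previous span
            }
        };
    }
}
```

Wrapping at submission time also avoids the re-entrancy concern above: the worker sees the already-active span rather than creating a new one.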
[jira] [Resolved] (HADOOP-16524) Automatic keystore reloading for HttpServer2
[ https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack resolved HADOOP-16524. Resolution: Fixed Resolving again. Thanks for the feature [~borislav.iordanov] contrib. > Automatic keystore reloading for HttpServer2 > > > Key: HADOOP-16524 > URL: https://issues.apache.org/jira/browse/HADOOP-16524 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Kihwal Lee >Assignee: Borislav Iordanov >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0 > > Attachments: HADOOP-16524.patch > > Time Spent: 5h 50m > Remaining Estimate: 0h > > Jetty 9 simplified reloading of keystore. This allows hadoop daemon's SSL > cert to be updated in place without having to restart the service. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16524) Automatic keystore reloading for HttpServer2
[ https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17312594#comment-17312594 ] Michael Stack commented on HADOOP-16524: Pushed new PR that fixes yarn issue to branch-3.3 and trunk (took a few attempts for me to get the commit message format right). Ran the PR a few times and got different flakies each time through: none seemed related. Please shout if we broke anything. > Automatic keystore reloading for HttpServer2 > > > Key: HADOOP-16524 > URL: https://issues.apache.org/jira/browse/HADOOP-16524 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Kihwal Lee >Assignee: Borislav Iordanov >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0 > > Attachments: HADOOP-16524.patch > > Time Spent: 5h 50m > Remaining Estimate: 0h > > Jetty 9 simplified reloading of keystore. This allows hadoop daemon's SSL > cert to be updated in place without having to restart the service. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
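The reload trigger behind a feature like this can be sketched with a modification-time check. This is an assumption-laden stand-in, not the committed code: the real change relies on Jetty 9's own keystore reload support, and the supplier/callback names here are hypothetical:

```java
import java.util.function.LongSupplier;

// Sketch of the hot-reload trigger: poll the keystore's last-modified
// timestamp (supplied as a LongSupplier, e.g. keystoreFile::lastModified)
// and run a caller-supplied reload action when it changes, so the SSL
// cert is picked up without a daemon restart.
public class KeystoreWatcher {
    private final LongSupplier lastModifiedSource;
    private long lastSeen;

    public KeystoreWatcher(LongSupplier lastModifiedSource) {
        this.lastModifiedSource = lastModifiedSource;
        this.lastSeen = lastModifiedSource.getAsLong();
    }

    /** Runs the reload action and returns true if the keystore changed. */
    public boolean checkAndReload(Runnable reloadAction) {
        long now = lastModifiedSource.getAsLong();
        if (now != lastSeen) {
            lastSeen = now;
            reloadAction.run();   // e.g. ask Jetty to reload its SSL context
            return true;
        }
        return false;
    }
}
```

Taking the timestamp as a supplier rather than a File keeps the trigger logic testable without touching the filesystem.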
[jira] [Work logged] (HADOOP-17609) Make SM4 support optional for OpenSSL native code
[ https://issues.apache.org/jira/browse/HADOOP-17609?focusedWorklogId=575023=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575023 ] ASF GitHub Bot logged work on HADOOP-17609: --- Author: ASF GitHub Bot Created on: 31/Mar/21 18:05 Start Date: 31/Mar/21 18:05 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2847: URL: https://github.com/apache/hadoop/pull/2847#issuecomment-811296822 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 54s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 57s | | trunk passed | | +1 :green_heart: | compile | 22m 5s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 18m 48s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 2s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 31s | | trunk passed | | +1 :green_heart: | javadoc | 1m 5s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 34s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 27s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 16s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 54s | | the patch passed | | +1 :green_heart: | compile | 20m 8s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | -1 :x: | cc | 20m 8s | [/results-compile-cc-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2847/1/artifact/out/results-compile-cc-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 28 new + 329 unchanged - 28 fixed = 357 total (was 357) | | +1 :green_heart: | golang | 20m 8s | | the patch passed | | +1 :green_heart: | javac | 20m 8s | | the patch passed | | +1 :green_heart: | compile | 18m 0s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | -1 :x: | cc | 18m 0s | [/results-compile-cc-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2847/1/artifact/out/results-compile-cc-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 42 new + 315 unchanged - 42 fixed = 357 total (was 357) | | +1 :green_heart: | golang | 18m 0s | | the patch passed | | +1 :green_heart: | javac | 18m 0s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 1m 4s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 29s | | the patch passed | | +1 :green_heart: | javadoc | 1m 2s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 34s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 30s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 53s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 27s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 56s | | The patch does not generate ASF License warnings. | | | | 179m 4s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2847/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2847 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle
[GitHub] [hadoop] hadoop-yetus commented on pull request #2847: HADOOP-17609. Make SM4 support optional for OpenSSL native code.
[jira] [Work logged] (HADOOP-17619) Fix DelegationTokenRenewer#updateRenewalTime java doc error.
[ https://issues.apache.org/jira/browse/HADOOP-17619?focusedWorklogId=575007=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575007 ] ASF GitHub Bot logged work on HADOOP-17619: --- Author: ASF GitHub Bot Created on: 31/Mar/21 17:45 Start Date: 31/Mar/21 17:45 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2846: URL: https://github.com/apache/hadoop/pull/2846#issuecomment-811284097 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 41s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 53s | | trunk passed | | +1 :green_heart: | compile | 20m 31s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 17m 54s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 4s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 31s | | trunk passed | | +1 :green_heart: | javadoc | 1m 3s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 37s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 18s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 29s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 52s | | the patch passed | | +1 :green_heart: | compile | 19m 51s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 19m 51s | | the patch passed | | +1 :green_heart: | compile | 18m 2s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 18m 2s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 5s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 30s | | the patch passed | | +1 :green_heart: | javadoc | 1m 3s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 33s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 28s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 39s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 12s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | | The patch does not generate ASF License warnings. 
| | | | 175m 39s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2846/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2846 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 1c5b54a35b5b 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / c806eec7a8076d80de8c0fb16303e4fae4d3f4bc | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2846/1/testReport/ | | Max. process+thread count | 2135 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2846/1/console | |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2846: HADOOP-17619: Fix DelegationTokenRenewer#updateRenewalTime java doc e…
[jira] [Work logged] (HADOOP-17511) Add an Audit plugin point for S3A auditing/context
[ https://issues.apache.org/jira/browse/HADOOP-17511?focusedWorklogId=575000&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575000 ] ASF GitHub Bot logged work on HADOOP-17511: --- Author: ASF GitHub Bot Created on: 31/Mar/21 17:35 Start Date: 31/Mar/21 17:35 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2807: URL: https://github.com/apache/hadoop/pull/2807#issuecomment-811277712 I'm going to do a squash of the PR and push up, as yetus has completely given up trying to build this -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 575000) Time Spent: 13h 10m (was: 13h) > Add an Audit plugin point for S3A auditing/context > -- > > Key: HADOOP-17511 > URL: https://issues.apache.org/jira/browse/HADOOP-17511 > Project: Hadoop Common > Issue Type: Sub-task > Affects Versions: 3.3.1 > Reporter: Steve Loughran > Assignee: Steve Loughran > Priority: Major > Labels: pull-request-available > Time Spent: 13h 10m > Remaining Estimate: 0h > > Add a way for auditing tools to correlate S3 object calls with Hadoop FS API calls. > Initially just to log/forward to an auditing service. > Later: let us attach them as parameters in S3 requests, such as opentrace headers or (my initial idea: http referrer header - where it will get into the log) > Challenges > * ensuring the audit span is created for every public entry point. That will have to include those used in s3guard tools, some de facto public APIs > * and not re-entered for active spans. S3A code must not call back into the FS API points > * Propagation across worker threads -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
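The challenges listed in the issue — create a span at each public entry point, avoid re-entering for an already-active span, and propagate the span across worker threads — can be sketched roughly as below. This is a hypothetical illustration only: `AuditSketch`, `AuditSpan`, `enter`, and `propagating` are invented names, not the real S3A audit API that HADOOP-17511 adds.

```java
public class AuditSketch {

    // Minimal stand-in for an audit span; the real API would carry much more.
    interface AuditSpan {
        String operation();
    }

    // The active span for the current thread.
    private static final ThreadLocal<AuditSpan> ACTIVE = new ThreadLocal<>();

    static AuditSpan current() {
        return ACTIVE.get();
    }

    // Called at each public FS API entry point. If a span is already active
    // (an internal call re-entering the API), reuse it rather than nesting.
    static AuditSpan enter(String operation) {
        AuditSpan active = ACTIVE.get();
        if (active != null) {
            return active;
        }
        AuditSpan span = () -> operation;
        ACTIVE.set(span);
        return span;
    }

    static void exit() {
        ACTIVE.remove();
    }

    // Capture the caller's span now and re-activate it inside the worker
    // thread for the duration of the task, restoring whatever was there.
    static Runnable propagating(Runnable task) {
        final AuditSpan captured = current();
        return () -> {
            AuditSpan previous = ACTIVE.get();
            ACTIVE.set(captured);
            try {
                task.run();
            } finally {
                ACTIVE.set(previous);
            }
        };
    }
}
```

Wrapping submitted `Runnable`s this way is one common answer to the "propagation across worker threads" point: the span is captured at submission time, not at execution time.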
[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector
[GitHub] [hadoop] steveloughran commented on a change in pull request #2775: MAPREDUCE-7329: HadoopPipes task has failed because of the ping timeout exception
steveloughran commented on a change in pull request #2775: URL: https://github.com/apache/hadoop/pull/2775#discussion_r605091729

## File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Application.java

## @@ -266,4 +271,39 @@ public static String createDigest(byte[] password, String data)
     return SecureShuffleUtils.hashFromString(data, key);
   }
+  private class PingSocketCleaner extends Thread {
+    PingSocketCleaner(String name) {
+      super(name);
+    }
+
+    @Override
+    public void run() {
+      LOG.info("PingSocketCleaner started...");
+      while (true) {
+        Socket clientSocket = null;
+        try {
+          clientSocket = serverSocket.accept();
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("Got one client socket...");
+          }
+          int readData = clientSocket.getInputStream().read();

Review comment: ...and presumably a finally() round the read, and use IOUtils.closeQuietly() for clientSocket, so that cases where it won't close because of some inner problem (such as it already being closed) don't trigger failures

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
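A minimal sketch of the pattern the review comment asks for — read inside try, close in finally, swallow close failures — might look like the following. `SocketReadSketch`, `readPingByte`, and this local `closeQuietly` are illustrative stand-ins for the structure, not the actual patch; the real helper in Hadoop lives in org.apache.hadoop.io.IOUtils.

```java
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;

public class SocketReadSketch {

    // Swallow close() failures so that cleanup of an already-closed (or
    // otherwise broken) stream never turns into a spurious task failure.
    static void closeQuietly(Closeable c) {
        if (c == null) {
            return; // null-safe, as the reviewer's suggestion requires
        }
        try {
            c.close();
        } catch (IOException ignored) {
            // deliberately ignored: a failed close is not actionable here
        }
    }

    // Read one ping byte, guaranteeing the stream is closed whatever happens;
    // in this sketch a failed read is treated like end-of-stream (-1).
    static int readPingByte(InputStream in) {
        try {
            return in.read();
        } catch (IOException e) {
            return -1;
        } finally {
            closeQuietly(in);
        }
    }
}
```

In the PR itself the same shape would wrap `clientSocket.getInputStream().read()`, with the quiet close applied to `clientSocket` in the finally block.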
[GitHub] [hadoop] saintstack merged pull request #2693: Hadoop 16524 - resubmission following some unit test fixes
saintstack merged pull request #2693: URL: https://github.com/apache/hadoop/pull/2693 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17608) TestKMS is flaky
[ https://issues.apache.org/jira/browse/HADOOP-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HADOOP-17608: Fix Version/s: 3.4.0 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) Thanks [~aajisaka] for the contribution. PR has been merged. > TestKMS is flaky > > > Key: HADOOP-17608 > URL: https://issues.apache.org/jira/browse/HADOOP-17608 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: flaky-test, pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 40m > Remaining Estimate: 0h > > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/460/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt] > The following https tests are flaky: > * testStartStopHttpsPseudo > * testStartStopHttpsKerberos > * testDelegationTokensOpsHttpsPseudo > {noformat} > [ERROR] > testStartStopHttpsPseudo(org.apache.hadoop.crypto.key.kms.server.TestKMS) > Time elapsed: 1.354 s <<< ERROR! > java.lang.NullPointerException > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:553) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:258) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:235) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:230) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStop(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsPseudo(TestKMS.java:634){noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17608) TestKMS is flaky
[ https://issues.apache.org/jira/browse/HADOOP-17608?focusedWorklogId=574989&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-574989 ] ASF GitHub Bot logged work on HADOOP-17608: --- Author: ASF GitHub Bot Created on: 31/Mar/21 16:57 Start Date: 31/Mar/21 16:57 Worklog Time Spent: 10m Work Description: xiaoyuyao commented on pull request #2828: URL: https://github.com/apache/hadoop/pull/2828#issuecomment-811251551 Thanks @aajisaka for the update. +1, I will merge it shortly. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 574989) Time Spent: 1.5h (was: 1h 20m) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17608) TestKMS is flaky
[ https://issues.apache.org/jira/browse/HADOOP-17608?focusedWorklogId=574990&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-574990 ] ASF GitHub Bot logged work on HADOOP-17608: --- Author: ASF GitHub Bot Created on: 31/Mar/21 16:57 Start Date: 31/Mar/21 16:57 Worklog Time Spent: 10m Work Description: xiaoyuyao merged pull request #2828: URL: https://github.com/apache/hadoop/pull/2828 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 574990) Time Spent: 1h 40m (was: 1.5h) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xiaoyuyao merged pull request #2828: HADOOP-17608. Fix NPE in TestKMS
[GitHub] [hadoop] xiaoyuyao commented on pull request #2828: HADOOP-17608. Fix NPE in TestKMS
[GitHub] [hadoop] hadoop-yetus commented on pull request #2549: Hadoop 17428. ABFS: Implementation for getContentSummary
hadoop-yetus commented on pull request #2549: URL: https://github.com/apache/hadoop/pull/2549#issuecomment-811247660 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 41s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 37s | | trunk passed | | +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 34s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 27s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 38s | | trunk passed | | +1 :green_heart: | javadoc | 0m 32s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 30s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 1s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 1s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 14m 19s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 0m 25s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 18s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 29s | | the patch passed | | +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 58s | | the patch passed | | +1 :green_heart: | shadedclient | 13m 54s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 51s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | | The patch does not generate ASF License warnings. 
| | | | 73m 9s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/26/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2549 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 58b146d8d32b 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 9b2723b6c347ebc9450e36e1137464a7b1a704bd | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/26/testReport/ | | Max. process+thread count | 536 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/26/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please
[jira] [Work logged] (HADOOP-17618) ABFS: Partially obfuscate SAS object IDs in Logs
[ https://issues.apache.org/jira/browse/HADOOP-17618?focusedWorklogId=574985=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-574985 ]

ASF GitHub Bot logged work on HADOOP-17618:
---
Author: ASF GitHub Bot
Created on: 31/Mar/21 16:46
Start Date: 31/Mar/21 16:46
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #2845:
URL: https://github.com/apache/hadoop/pull/2845#issuecomment-811242106

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 1m 1s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 34m 10s | | trunk passed |
| +1 :green_heart: | compile | 0m 46s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 0m 28s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 27s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 44s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 30s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 2s | | trunk passed |
| +1 :green_heart: | shadedclient | 15m 7s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 28s | | the patch passed |
| +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 0m 32s | | the patch passed |
| +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 0m 25s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 17s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 28s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 0s | | the patch passed |
| +1 :green_heart: | shadedclient | 13m 42s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 57s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 29s | | The patch does not generate ASF License warnings. |
| | | | 75m 56s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2845/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2845 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 38e03d0eae61 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 8e3ea6ef829b4d3391b986d1de118907369adcff |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2845/3/testReport/ |
| Max. process+thread count | 735 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2845/3/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] saintstack edited a comment on pull request #2693: Hadoop 16524 - resubmission following some unit test fixes
saintstack edited a comment on pull request #2693:
URL: https://github.com/apache/hadoop/pull/2693#issuecomment-811235899

The failures this time through, listed below, are new failures.

```
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAsyncScheduling.testAsyncSchedulerSkipNoRunningNMs
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerOvercommit.testKillMultipleContainers
```

Let me merge. Each build has new flaky test failures.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] saintstack commented on pull request #2693: Hadoop 16524 - resubmission following some unit test fixes
saintstack commented on pull request #2693:
URL: https://github.com/apache/hadoop/pull/2693#issuecomment-811235899

The failures this time through, listed below, are new failures.

```
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAsyncScheduling.testAsyncSchedulerSkipNoRunningNMs
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerOvercommit.testKillMultipleContainers
```

Let me merge.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on pull request #2844: HDFS-15940 : Fixing and refactoring tests specific to Block recovery
virajjasani commented on pull request #2844:
URL: https://github.com/apache/hadoop/pull/2844#issuecomment-811223465

@liuml07 @ayushtkn @tasanuma TestBlockRecovery is consistently failing; the purpose of this PR is to fix it and reduce noise from the QA bot. Please take a look if you get a chance. Thanks

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17618) ABFS: Partially obfuscate SAS object IDs in Logs
[ https://issues.apache.org/jira/browse/HADOOP-17618?focusedWorklogId=574937=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-574937 ]

ASF GitHub Bot logged work on HADOOP-17618:
---
Author: ASF GitHub Bot
Created on: 31/Mar/21 16:06
Start Date: 31/Mar/21 16:06
Worklog Time Spent: 10m
Work Description: steveloughran commented on pull request #2845:
URL: https://github.com/apache/hadoop/pull/2845#issuecomment-811176186

Is this for security/privacy issues, or just due diligence?

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 574937)
Time Spent: 40m (was: 0.5h)

> ABFS: Partially obfuscate SAS object IDs in Logs
>
>
> Key: HADOOP-17618
> URL: https://issues.apache.org/jira/browse/HADOOP-17618
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.3.1
> Reporter: Sumangala Patki
> Assignee: Sumangala Patki
> Priority: Major
> Labels: pull-request-available
> Time Spent: 40m
> Remaining Estimate: 0h
>
> Delegation SAS tokens are created using various parameters for specifying
> details such as permissions and validity. The requests are logged, along with
> values of all the query parameters. This change will partially mask values
> logged for the following object IDs representing the security principal:
> skoid, saoid, suoid

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2844: HDFS-15940 : Fixing and refactoring tests specific to Block recovery
hadoop-yetus commented on pull request #2844:
URL: https://github.com/apache/hadoop/pull/2844#issuecomment-811171892

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 1m 1s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 38s | | trunk passed |
| +1 :green_heart: | compile | 1m 20s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 1s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 23s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 54s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 29s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 7s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 18s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 15s | | the patch passed |
| +1 :green_heart: | compile | 1m 16s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 1m 16s | | the patch passed |
| +1 :green_heart: | compile | 1m 8s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 1m 8s | | the patch passed |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2844/1/artifact/out/blanks-eol.txt) | The patch has 37 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -0 :warning: | checkstyle | 0m 52s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2844/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 9 new + 56 unchanged - 6 fixed = 65 total (was 62) |
| +1 :green_heart: | mvnsite | 1m 17s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 47s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 20s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 11s | | the patch passed |
| +1 :green_heart: | shadedclient | 16m 15s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 401m 7s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2844/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 44s | | The patch does not generate ASF License warnings. |
| | | | 488m 32s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| | hadoop.hdfs.TestViewDistributedFileSystemContract |
| | hadoop.hdfs.TestPersistBlocks |
| | hadoop.hdfs.TestDFSShell |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
| | hadoop.hdfs.TestStateAlignmentContextWithHA |
| | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
| | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
| | hadoop.hdfs.web.TestWebHDFS |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.hdfs.server.datanode.TestBlockScanner |
| | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.TestHDFSFileSystemContract |
| | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2844/1/artifact/out/Dockerfile |
[jira] [Work logged] (HADOOP-17576) ABFS: Disable throttling update for auth failures
[ https://issues.apache.org/jira/browse/HADOOP-17576?focusedWorklogId=574919=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-574919 ]

ASF GitHub Bot logged work on HADOOP-17576:
---
Author: ASF GitHub Bot
Created on: 31/Mar/21 15:44
Start Date: 31/Mar/21 15:44
Worklog Time Spent: 10m
Work Description: sumangala-patki commented on a change in pull request #2761:
URL: https://github.com/apache/hadoop/pull/2761#discussion_r605001873

##
File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsRestOperationException.java
##
@@ -114,4 +117,32 @@ public void testWithDifferentCustomTokenFetchRetry(int numOfRetries) throws Exce
         + ") done, does not match with fs.azure.custom.token.fetch.retry.count configured (" + numOfRetries
         + ")", RetryTestTokenProvider.reTryCount == numOfRetries);
   }
+
+  @Test
+  public void testAuthFailException() throws Exception {
+    Configuration config = new Configuration(getRawConfiguration());
+    String accountName = config.get("fs.azure.abfs.account.name");
+    // Setup to configure custom token provider
+    config.set("fs.azure.account.auth.type." + accountName, "Custom");

Review comment:
   Replaced with constants

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 574919)
Time Spent: 1h 50m (was: 1h 40m)

> ABFS: Disable throttling update for auth failures
>
>
> Key: HADOOP-17576
> URL: https://issues.apache.org/jira/browse/HADOOP-17576
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.3.1
> Reporter: Sumangala Patki
> Assignee: Sumangala Patki
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h 50m
> Remaining Estimate: 0h
>
> Throttling metrics are updated post the execution of each request. Failures
> related to fetching access tokens and signing requests do not occur at the
> Store. Hence, such operations should not contribute to the measured Store
> failures, and are therefore excluded from the metric update for throttling.

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
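The behavior described in HADOOP-17576 can be illustrated with a minimal sketch. This is a hypothetical illustration, not the actual ABFS implementation: the class, enum, and method names below are invented for the example, which only shows the principle that failures occurring before the request reaches the store (token fetch, request signing) are excluded from the throttling metrics.

```java
// Hypothetical sketch: only store-side failures feed the throttling
// metrics; token-fetch and signing failures are intentionally ignored.
class ThrottlingUpdateSketch {
    enum FailureStage { TOKEN_FETCH, SIGNING, STORE }

    static int storeFailureCount = 0;

    // Returns true if the failure was counted toward throttling metrics.
    static boolean recordFailure(FailureStage stage) {
        if (stage == FailureStage.STORE) {
            // Only failures observed at the store contribute to the
            // measured store failures used for throttling decisions.
            storeFailureCount++;
            return true;
        }
        // Auth-related failures never reached the store, so they are
        // excluded from the metric update.
        return false;
    }

    public static void main(String[] args) {
        recordFailure(FailureStage.TOKEN_FETCH);
        recordFailure(FailureStage.SIGNING);
        recordFailure(FailureStage.STORE);
        System.out.println("store failures counted: " + storeFailureCount);
    }
}
```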
[GitHub] [hadoop] sumangala-patki commented on a change in pull request #2761: HADOOP-17576. ABFS: Disable throttling update for auth failures
sumangala-patki commented on a change in pull request #2761: URL: https://github.com/apache/hadoop/pull/2761#discussion_r605001873 ## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsRestOperationException.java ## @@ -114,4 +117,32 @@ public void testWithDifferentCustomTokenFetchRetry(int numOfRetries) throws Exce + ") done, does not match with fs.azure.custom.token.fetch.retry.count configured (" + numOfRetries + ")", RetryTestTokenProvider.reTryCount == numOfRetries); } + + @Test + public void testAuthFailException() throws Exception { +Configuration config = new Configuration(getRawConfiguration()); +String accountName = config.get("fs.azure.abfs.account.name"); +// Setup to configure custom token provider +config.set("fs.azure.account.auth.type." + accountName, "Custom"); Review comment: Replaced with constants -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
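The review resolution above ("Replaced with constants") reflects a common pattern: configuration key strings are defined once as named constants and account-specific keys are built from them, instead of repeating string literals. The constant and helper names below are illustrative assumptions, not the actual ABFS `ConfigurationKeys` entries.

```java
// Illustration of building configuration keys from named constants
// rather than inline string literals (names here are hypothetical).
class ConfigKeyExample {
    static final String FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME =
        "fs.azure.account.auth.type";

    // Account-specific keys are formed by appending the account name
    // to the base property name.
    static String accountProperty(String base, String accountName) {
        return base + "." + accountName;
    }

    public static void main(String[] args) {
        String key = accountProperty(
            FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME,
            "myaccount.dfs.core.windows.net");
        System.out.println(key);
    }
}
```

Defining the key once means a typo in the literal is caught at the single definition site rather than silently producing a key that never matches.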
[jira] [Updated] (HADOOP-17618) ABFS: Partially obfuscate SAS object IDs in Logs
[ https://issues.apache.org/jira/browse/HADOOP-17618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sumangala Patki updated HADOOP-17618:
---
Description:
Delegation SAS tokens are created using various parameters for specifying details such as permissions and validity. The requests are logged, along with values of all the query parameters. This change will partially mask values logged for the following object IDs representing the security principal: skoid, saoid, suoid

(was: Delegation SAS tokens are created using various parameters for specifying details such as permissions and validity. The requests are logged, along with values of all the query parameters. This change will partially mask the values of the following object IDs representing the security principal: skoid, saoid, suoid)

> ABFS: Partially obfuscate SAS object IDs in Logs
>
>
> Key: HADOOP-17618
> URL: https://issues.apache.org/jira/browse/HADOOP-17618
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.3.1
> Reporter: Sumangala Patki
> Assignee: Sumangala Patki
> Priority: Major
> Labels: pull-request-available
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> Delegation SAS tokens are created using various parameters for specifying
> details such as permissions and validity. The requests are logged, along with
> values of all the query parameters. This change will partially mask values
> logged for the following object IDs representing the security principal:
> skoid, saoid, suoid

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
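The partial masking described in HADOOP-17618 can be sketched with a minimal helper. This is an illustrative assumption, not the actual ABFS change: the method name and the number of retained characters are invented for the example, which shows only the general idea of keeping a few characters of a GUID-like object ID (skoid, saoid, suoid) and masking the rest before logging.

```java
// Hypothetical sketch of partially obfuscating a SAS object-ID query
// parameter value before it is written to a log line.
class SasIdMasker {
    // Keep the first and last four characters of the value and mask
    // everything in between; values too short to mask safely are
    // replaced entirely.
    static String partialMask(String value) {
        if (value == null || value.length() <= 8) {
            return "XXXX";
        }
        StringBuilder sb = new StringBuilder(value.length());
        sb.append(value, 0, 4);
        for (int i = 0; i < value.length() - 8; i++) {
            sb.append('X');
        }
        sb.append(value, value.length() - 4, value.length());
        return sb.toString();
    }

    public static void main(String[] args) {
        // A GUID-shaped skoid value; only the edges survive in the log.
        String skoid = "a1b2c3d4-0000-1111-2222-333344445555";
        System.out.println("skoid=" + partialMask(skoid));
    }
}
```

The masked value still lets an operator correlate log lines from the same principal without exposing the full identifier.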
[GitHub] [hadoop] hadoop-yetus commented on pull request #2838: HDFS-15937. Reduce memory used during datanode layout upgrade
hadoop-yetus commented on pull request #2838:
URL: https://github.com/apache/hadoop/pull/2838#issuecomment-811158199

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 44s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 32s | | trunk passed |
| +1 :green_heart: | compile | 1m 19s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 1m 14s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 2s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 21s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 54s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 26s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 2s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 0s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 12s | | the patch passed |
| +1 :green_heart: | compile | 1m 14s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 1m 14s | | the patch passed |
| +1 :green_heart: | compile | 1m 3s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 1m 3s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 54s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 30 unchanged - 3 fixed = 30 total (was 33) |
| +1 :green_heart: | mvnsite | 1m 14s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 21s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 5s | | the patch passed |
| +1 :green_heart: | shadedclient | 15m 49s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 230m 1s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2838/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 44s | | The patch does not generate ASF License warnings. |
| | | | 315m 2s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.datanode.TestBlockScanner |
| | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
| | hadoop.hdfs.server.datanode.TestBlockRecovery |
| | hadoop.hdfs.server.balancer.TestBalancer |
| | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2838/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2838 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux e7d77a5eb290 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 559216c798b02b2b2803be64e978d263368e55ef |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2838/3/testReport/ |
| Max.
[jira] [Updated] (HADOOP-17609) Make SM4 support optional for OpenSSL native code
[ https://issues.apache.org/jira/browse/HADOOP-17609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Masatake Iwasaki updated HADOOP-17609:
---
Status: Patch Available (was: Open)

> Make SM4 support optional for OpenSSL native code
>
>
> Key: HADOOP-17609
> URL: https://issues.apache.org/jira/browse/HADOOP-17609
> Project: Hadoop Common
> Issue Type: Improvement
> Components: native
> Affects Versions: 3.4.0
> Reporter: Masatake Iwasaki
> Assignee: Masatake Iwasaki
> Priority: Major
> Labels: pull-request-available
> Time Spent: 10m
> Remaining Estimate: 0h
>
> openssl-devel-1.1.1g provided by CentOS 8 does not work after HDFS-15098
> because SM4 is not enabled in the openssl package. We should not force
> users to install OpenSSL from source code even if they do not use the SM4
> feature.

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
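HADOOP-17609 concerns the OpenSSL native (C) side, but the general principle of "optional cipher support" can be sketched in plain Java: probe for the algorithm at runtime instead of assuming it exists, and let callers fall back when it is unavailable. The class and method names are illustrative assumptions, not Hadoop code; on a stock JDK with no SM4 provider registered (analogous to an OpenSSL build without SM4), the probe reports the cipher as unavailable rather than failing hard.

```java
import javax.crypto.Cipher;

// Sketch of optional SM4 support: probe for the cipher at runtime so
// that environments without SM4 degrade gracefully instead of failing.
class Sm4Probe {
    static boolean isSm4Available() {
        try {
            // Throws NoSuchAlgorithmException when no registered
            // provider supplies an SM4 implementation.
            Cipher.getInstance("SM4/CTR/NoPadding");
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("SM4 available: " + isSm4Available());
    }
}
```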
[jira] [Work logged] (HADOOP-17536) Support for customer provided encryption key
[ https://issues.apache.org/jira/browse/HADOOP-17536?focusedWorklogId=574904=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-574904 ]

ASF GitHub Bot logged work on HADOOP-17536:
---
Author: ASF GitHub Bot
Created on: 31/Mar/21 15:19
Start Date: 31/Mar/21 15:19
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #2707:
URL: https://github.com/apache/hadoop/pull/2707#issuecomment-811141758

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 1m 26s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 35m 3s | | trunk passed |
| +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 0m 34s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 26s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 40s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 31s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 30s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 2s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 0s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 16m 20s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 31s | | the patch passed |
| +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 0m 32s | | the patch passed |
| +1 :green_heart: | compile | 0m 28s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 0m 28s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 17s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/10/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 4 new + 9 unchanged - 0 fixed = 13 total (was 9) |
| +1 :green_heart: | mvnsite | 0m 30s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 4s | | the patch passed |
| +1 :green_heart: | shadedclient | 16m 41s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 2s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 32s | | The patch does not generate ASF License warnings. |
| | | | 81m 31s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2707/10/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2707 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 8670aec7bb04 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 2cd36d7aaee8354f70468e6ae830b4c294ced0fa |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Test Results |
[jira] [Commented] (HADOOP-17612) Bump default Zookeeper version to 3.7.0
[ https://issues.apache.org/jira/browse/HADOOP-17612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17312449#comment-17312449 ] Enrico Olivelli commented on HADOOP-17612: -- My suggestion is to move to ZooKeeper 3.6 and to Curator 5.1. We are going to cut a release of ZooKeeper 3.6.3 and of Curator 5.1.1, probably you can wait until those releases. I guess it will happen in a couple of weeks > Bump default Zookeeper version to 3.7.0 > --- > > Key: HADOOP-17612 > URL: https://issues.apache.org/jira/browse/HADOOP-17612 > Project: Hadoop Common > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > We can bump Zookeeper version to 3.7.0 for trunk. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17609) Make SM4 support optional for OpenSSL native code
[ https://issues.apache.org/jira/browse/HADOOP-17609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17312447#comment-17312447 ] Masatake Iwasaki commented on HADOOP-17609: --- SM4 is intentionally disabled in openssl-1.1.1 of CentOS. https://git.centos.org/rpms/openssl/blob/3dfed0dc2b196e3d2f958d4951348f41b6cea64b/f/SPECS/openssl.spec#_280 {noformat} # ia64, x86_64, ppc are OK by default # Configure the build tree. Override OpenSSL defaults with known-good defaults # usable on all platforms. The Configure script already knows to use -fPIC and # RPM_OPT_FLAGS, so we can skip specifiying them here. ./Configure \ --prefix=%{_prefix} --openssldir=%{_sysconfdir}/pki/tls ${sslflags} \ --system-ciphers-file=%{_sysconfdir}/crypto-policies/back-ends/openssl.config \ zlib enable-camellia enable-seed enable-rfc3779 enable-sctp \ enable-cms enable-md2 enable-rc5\ enable-weak-ssl-ciphers \ no-mdc2 no-ec2m no-sm2 no-sm4 \ shared ${sslarch} $RPM_OPT_FLAGS '-DDEVRANDOM="\"/dev/urandom\""' {noformat} > Make SM4 support optional for OpenSSL native code > - > > Key: HADOOP-17609 > URL: https://issues.apache.org/jira/browse/HADOOP-17609 > Project: Hadoop Common > Issue Type: Improvement > Components: native >Affects Versions: 3.4.0 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > openssl-devel-1.1.1g provided by CentOS 8 does not work after HDFS-15098 > because the SM4 is not enabled on the openssl package. We should not force > users to install OpenSSL from source code even if they do not use SM4 feature. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
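The CentOS spec quoted above builds OpenSSL with `no-sm4`, so the SM4 cipher is simply absent from the distro package at runtime, which is why HADOOP-17609 makes Hadoop's native SM4 support optional. As a rough illustration of the same kind of runtime availability probe — here against the JVM's own JCE providers rather than Hadoop's native OpenSSL binding, with a hypothetical helper name — one might write:

```java
import java.security.NoSuchAlgorithmException;
import javax.crypto.Cipher;
import javax.crypto.NoSuchPaddingException;

public class Sm4Check {
    // Returns true only if some installed JCE provider offers SM4/CTR.
    // Standard JDK providers do not ship SM4; third-party providers
    // such as BouncyCastle register it when installed.
    static boolean isSm4Available() {
        try {
            Cipher.getInstance("SM4/CTR/NoPadding");
            return true;
        } catch (NoSuchAlgorithmException | NoSuchPaddingException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("SM4 available: " + isSm4Available());
    }
}
```

The same "probe, then degrade gracefully" pattern is what an optional native feature needs: detect absence and disable the codec instead of failing to load.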
[jira] [Work logged] (HADOOP-17609) Make SM4 support optional for OpenSSL native code
[ https://issues.apache.org/jira/browse/HADOOP-17609?focusedWorklogId=574898=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-574898 ] ASF GitHub Bot logged work on HADOOP-17609: --- Author: ASF GitHub Bot Created on: 31/Mar/21 15:05 Start Date: 31/Mar/21 15:05 Worklog Time Spent: 10m Work Description: iwasakims opened a new pull request #2847: URL: https://github.com/apache/hadoop/pull/2847 https://issues.apache.org/jira/browse/HADOOP-17609 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 574898) Remaining Estimate: 0h Time Spent: 10m > Make SM4 support optional for OpenSSL native code > - > > Key: HADOOP-17609 > URL: https://issues.apache.org/jira/browse/HADOOP-17609 > Project: Hadoop Common > Issue Type: Improvement > Components: native >Affects Versions: 3.4.0 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > openssl-devel-1.1.1g provided by CentOS 8 does not work after HDFS-15098 > because the SM4 is not enabled on the openssl package. We should not force > users to install OpenSSL from source code even if they do not use SM4 feature. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17609) Make SM4 support optional for OpenSSL native code
[ https://issues.apache.org/jira/browse/HADOOP-17609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-17609: Labels: pull-request-available (was: ) > Make SM4 support optional for OpenSSL native code > - > > Key: HADOOP-17609 > URL: https://issues.apache.org/jira/browse/HADOOP-17609 > Project: Hadoop Common > Issue Type: Improvement > Components: native >Affects Versions: 3.4.0 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > openssl-devel-1.1.1g provided by CentOS 8 does not work after HDFS-15098 > because the SM4 is not enabled on the openssl package. We should not force > users to install OpenSSL from source code even if they do not use SM4 feature. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17618) ABFS: Partially obfuscate SAS object IDs in Logs
[ https://issues.apache.org/jira/browse/HADOOP-17618?focusedWorklogId=574896=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-574896 ] ASF GitHub Bot logged work on HADOOP-17618: --- Author: ASF GitHub Bot Created on: 31/Mar/21 15:04 Start Date: 31/Mar/21 15:04 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2845: URL: https://github.com/apache/hadoop/pull/2845#issuecomment-811134933 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 56s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 38m 8s | | trunk passed | | +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 32s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 25s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 38s | | trunk passed | | +1 :green_heart: | javadoc | 0m 31s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 0s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 11s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 0m 26s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 17s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 28s | | the patch passed | | +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | -1 :x: | spotbugs | 1m 2s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2845/2/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) | hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | shadedclient | 14m 5s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 56s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. 
| | | | 79m 13s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-tools/hadoop-azure | | | org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.SAS_OID_PARAM_KEYS should be package protected At AbfsHttpOperation.java: At AbfsHttpOperation.java:[line 55] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2845/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2845 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 99c675fb5a63 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / dc641124c903e8650f8512f91ed0b31636b78012 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private
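The spotbugs finding above concerns the visibility of `SAS_OID_PARAM_KEYS`, the set of query-parameter names (`skoid`, `saoid`, `suoid`) whose values the patch partially masks before logging, since they identify the security principal. A standalone sketch of that kind of partial masking — the exact number of characters ABFS retains is an implementation detail of the patch; keeping four at each end is an assumption here, as is the class name:

```java
import java.util.Set;

public class OidMasker {
    // Query-parameter keys whose values identify the security principal.
    static final Set<String> SAS_OID_PARAM_KEYS = Set.of("skoid", "saoid", "suoid");

    // Keep the first and last four characters of the object ID and
    // replace everything in between with a fixed filler (assumed policy).
    static String partiallyMask(String oid) {
        if (oid == null || oid.length() <= 8) {
            return "XXXX";
        }
        return oid.substring(0, 4) + "XXXX" + oid.substring(oid.length() - 4);
    }

    // Mask the value of a single key=value query pair if the key is sensitive;
    // leave all other parameters (permissions, expiry, etc.) readable.
    static String maskQueryPair(String pair) {
        int eq = pair.indexOf('=');
        if (eq < 0) {
            return pair;
        }
        String key = pair.substring(0, eq);
        if (!SAS_OID_PARAM_KEYS.contains(key)) {
            return pair;
        }
        return key + "=" + partiallyMask(pair.substring(eq + 1));
    }

    public static void main(String[] args) {
        System.out.println(maskQueryPair("skoid=01234567-89ab-cdef-0123-456789abcdef"));
        System.out.println(maskQueryPair("sp=rl"));
    }
}
```

Partial masking keeps the logs debuggable (the retained characters still let operators correlate requests) while no longer recording the full principal identifier.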
[jira] [Work logged] (HADOOP-17619) Fix DelegationTokenRenewer#updateRenewalTime java doc error.
[ https://issues.apache.org/jira/browse/HADOOP-17619?focusedWorklogId=574888=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-574888 ] ASF GitHub Bot logged work on HADOOP-17619: --- Author: ASF GitHub Bot Created on: 31/Mar/21 14:48 Start Date: 31/Mar/21 14:48 Worklog Time Spent: 10m Work Description: qizhu-lucas opened a new pull request #2846: URL: https://github.com/apache/hadoop/pull/2846 …rror. ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 574888) Remaining Estimate: 0h Time Spent: 10m > Fix DelegationTokenRenewer#updateRenewalTime java doc error. > > > Key: HADOOP-17619 > URL: https://issues.apache.org/jira/browse/HADOOP-17619 > Project: Hadoop Common > Issue Type: Bug >Reporter: Qi Zhu >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > The param of updateRenewalTime should be the renew cycle, not the new time. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
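The javadoc error being fixed is small but worth spelling out: the parameter of `DelegationTokenRenewer#updateRenewalTime` is the token's renew cycle (an interval), not the new renewal timestamp. A minimal stand-in sketch of what the corrected documentation communicates — the half-cycle scheduling is an assumed policy for illustration, and this class is not Hadoop's actual implementation:

```java
public class RenewalSketch {
    private long renewalTime;

    /**
     * Set a new renewal time for the token, relative to now.
     *
     * @param renewCycle the renew cycle of the token, in milliseconds
     *                   (an interval, not an absolute timestamp)
     */
    void updateRenewalTime(long renewCycle) {
        // Schedule the next renewal partway through the cycle so a
        // transient failure still leaves time to retry before expiry
        // (assumed policy for this sketch).
        renewalTime = System.currentTimeMillis() + renewCycle / 2;
    }

    long getRenewalTime() {
        return renewalTime;
    }
}
```

Documenting the parameter as an interval matters to callers: passing an absolute timestamp where a cycle is expected would schedule renewals absurdly far in the future.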
[jira] [Updated] (HADOOP-17619) Fix DelegationTokenRenewer#updateRenewalTime java doc error.
[ https://issues.apache.org/jira/browse/HADOOP-17619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-17619: Labels: pull-request-available (was: ) > Fix DelegationTokenRenewer#updateRenewalTime java doc error. > > > Key: HADOOP-17619 > URL: https://issues.apache.org/jira/browse/HADOOP-17619 > Project: Hadoop Common > Issue Type: Bug >Reporter: Qi Zhu >Priority: Minor > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > The param of updateRenewalTime should be the renew cycle, not the new time. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17617) Incorrect representation of RESPONSE for Get Key Version in KMS index.md.vm file
[ https://issues.apache.org/jira/browse/HADOOP-17617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17312434#comment-17312434 ] Hadoop QA commented on HADOOP-17617: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 25s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 49s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 38m 35s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 20s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/177/artifact/out/whitespace-eol.txt{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 58s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green}{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 40s{color} | {color:black}{color} | {color:black}{color} | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/177/artifact/out/Dockerfile | | JIRA Issue | HADOOP-17617 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13023246/HADOOP-17617.001.patch | | Optional Tests | dupname asflicense mvnsite | | uname | Linux 4de7033b6606 4.15.0-126-generic #129-Ubuntu SMP Mon Nov 23 18:53:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / ff6ec20d845 | | Max. process+thread count | 516 (vs. 
ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-kms U: hadoop-common-project/hadoop-kms | | Console output | https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/177/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. > Incorrect representation of RESPONSE for Get Key Version in KMS index.md.vm > file > > > Key: HADOOP-17617 > URL: https://issues.apache.org/jira/browse/HADOOP-17617 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ravuri Sushma sree >Assignee: Ravuri Sushma sree >Priority: Major > Attachments: HADOOP-17617.001.patch > > > Format of RESPONSE of Get Key Versions in KMS index.md.vm is incorrect > https://hadoop.apache.org/docs/r3.1.1/hadoop-kms/index.html#Get_Key_Versions -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-17612) Bump default Zookeeper version to 3.7.0
[ https://issues.apache.org/jira/browse/HADOOP-17612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17312297#comment-17312297 ] Viraj Jasani edited comment on HADOOP-17612 at 3/31/21, 2:43 PM: - Ah I see, bumping Zookeeper to 3.7 might not be doable sooner than what I expected because we use curator and so far, curator supports 3.6 as max version of Zookeeper. There are some ZookeeperServer level refactoring done in 3.7 release due to which curator too needs some minor code changes in order to support 3.7 version. We might have to hold on to this until we have new curator release that supports 3.7. Edit: The best we can do as of this point is bump up Zookeeper to 3.6 and curator to the appropriate version that uses 3.6. was (Author: vjasani): Ah I see, bumping Zookeeper to 3.7 might not be doable sooner than what I expected because we use curator and so far, curator supports 3.6 as max version of Zookeeper. There are some ZookeeperServer level refactoring done in 3.7 release due to which curator too needs some minor code changes in order to support 3.7 version. We might have to hold on to this until we have new curator release that supports 3.7. > Bump default Zookeeper version to 3.7.0 > --- > > Key: HADOOP-17612 > URL: https://issues.apache.org/jira/browse/HADOOP-17612 > Project: Hadoop Common > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > We can bump Zookeeper version to 3.7.0 for trunk. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17618) ABFS: Partially obfuscate SAS object IDs in Logs
[ https://issues.apache.org/jira/browse/HADOOP-17618?focusedWorklogId=574886=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-574886 ] ASF GitHub Bot logged work on HADOOP-17618: --- Author: ASF GitHub Bot Created on: 31/Mar/21 14:41 Start Date: 31/Mar/21 14:41 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2845: URL: https://github.com/apache/hadoop/pull/2845#issuecomment-89174 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 22m 0s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 36m 52s | | trunk passed | | +1 :green_heart: | compile | 0m 40s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 26s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 37s | | trunk passed | | +1 :green_heart: | javadoc | 0m 31s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 28s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 1s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 9s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 29s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 24s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 0m 24s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 16s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 26s | | the patch passed | | +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | -1 :x: | spotbugs | 1m 7s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2845/1/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) | hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | shadedclient | 14m 4s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 54s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | | The patch does not generate ASF License warnings. 
| | | | 98m 47s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-tools/hadoop-azure | | | org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.SAS_OID_PARAM_KEYS should be package protected At AbfsHttpOperation.java: At AbfsHttpOperation.java:[line 55] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2845/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2845 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 43b865b6a2ef 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / e0b34b31c682c24cd049910e0c4e8ba63213e4ea | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2845: HADOOP-17618. ABFS: Partially obfuscate SAS object IDs in Logs
hadoop-yetus commented on pull request #2845: URL: https://github.com/apache/hadoop/pull/2845#issuecomment-89174 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 22m 0s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 36m 52s | | trunk passed | | +1 :green_heart: | compile | 0m 40s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 26s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 37s | | trunk passed | | +1 :green_heart: | javadoc | 0m 31s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 28s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 1s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 9s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 29s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 24s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 0m 24s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 16s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 26s | | the patch passed | | +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | -1 :x: | spotbugs | 1m 7s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2845/1/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) | hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | shadedclient | 14m 4s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 54s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | | The patch does not generate ASF License warnings. 
| | | | 98m 47s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-tools/hadoop-azure | | | org.apache.hadoop.fs.azurebfs.services.AbfsHttpOperation.SAS_OID_PARAM_KEYS should be package protected At AbfsHttpOperation.java: At AbfsHttpOperation.java:[line 55] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2845/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2845 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 43b865b6a2ef 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / e0b34b31c682c24cd049910e0c4e8ba63213e4ea | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2845/1/testReport/ | | Max. process+thread count | 717 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output |
[jira] [Updated] (HADOOP-17619) Fix DelegationTokenRenewer#updateRenewalTime java doc error.
[ https://issues.apache.org/jira/browse/HADOOP-17619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Qi Zhu updated HADOOP-17619: Description: The param of updateRenewalTime should be the renew cycle, not the new time. (was: The param of updateRenewalTime should be the renew cycle.) > Fix DelegationTokenRenewer#updateRenewalTime java doc error. > > > Key: HADOOP-17619 > URL: https://issues.apache.org/jira/browse/HADOOP-17619 > Project: Hadoop Common > Issue Type: Bug >Reporter: Qi Zhu >Priority: Minor > > The param of updateRenewalTime should be the renew cycle, not the new time. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-17619) Fix DelegationTokenRenewer#updateRenewalTime java doc error.
Qi Zhu created HADOOP-17619: --- Summary: Fix DelegationTokenRenewer#updateRenewalTime java doc error. Key: HADOOP-17619 URL: https://issues.apache.org/jira/browse/HADOOP-17619 Project: Hadoop Common Issue Type: Bug Reporter: Qi Zhu The param of updateRenewalTime should be the renew cycle. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17617) Incorrect representation of RESPONSE for Get Key Version in KMS index.md.vm file
[ https://issues.apache.org/jira/browse/HADOOP-17617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravuri Sushma sree updated HADOOP-17617: Attachment: HADOOP-17617.001.patch Status: Patch Available (was: Open) > Incorrect representation of RESPONSE for Get Key Version in KMS index.md.vm > file > > > Key: HADOOP-17617 > URL: https://issues.apache.org/jira/browse/HADOOP-17617 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ravuri Sushma sree >Assignee: Ravuri Sushma sree >Priority: Major > Attachments: HADOOP-17617.001.patch > > > Format of RESPONSE of Get Key Versions in KMS index.md.vm is incorrect > https://hadoop.apache.org/docs/r3.1.1/hadoop-kms/index.html#Get_Key_Versions -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
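For context on the doc bug above: the KMS "Get Key Versions" REST call (GET http://HOST:PORT/kms/v1/key/<key-name>/_versions) returns a JSON array of key-version objects, and the issue is that the index.md.vm page misrepresents that response's format. A hedged sketch of the expected shape follows; the field names and placeholder values here are illustrative and should be verified against the published KMS REST API documentation rather than taken from this sketch:

```json
[
  { "name" : "<version-name>", "material" : "<base64-encoded-material>" },
  { "name" : "<version-name>", "material" : "<base64-encoded-material>" }
]
```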
[jira] [Work logged] (HADOOP-17536) Suport for customer provided encrption key
[ https://issues.apache.org/jira/browse/HADOOP-17536?focusedWorklogId=574858=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-574858 ] ASF GitHub Bot logged work on HADOOP-17536: --- Author: ASF GitHub Bot Created on: 31/Mar/21 13:40 Start Date: 31/Mar/21 13:40 Worklog Time Spent: 10m Work Description: bilaharith commented on a change in pull request #2707: URL: https://github.com/apache/hadoop/pull/2707#discussion_r604906591 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java ## @@ -536,6 +594,7 @@ public AbfsRestOperation setPathProperties(final String path, final String prope public AbfsRestOperation getPathStatus(final String path, final boolean includeProperties) throws AzureBlobFileSystemException { Review comment: Only one setPathProperties exists -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 574858) Time Spent: 3.5h (was: 3h 20m) > Suport for customer provided encrption key > -- > > Key: HADOOP-17536 > URL: https://issues.apache.org/jira/browse/HADOOP-17536 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.4.0 >Reporter: Bilahari T H >Assignee: Bilahari T H >Priority: Minor > Labels: pull-request-available > Time Spent: 3.5h > Remaining Estimate: 0h > > The data for a particular customer needs to be encrypted on account level. At > server side the APIs will start accepting the encryption key as part of > request headers. The data will be encrypted/decrypted with the given key at > the server. > Since the ABFS FileSystem APIs are implementations for Hadoop FileSystem APIs > there is no direct way with which customer can pass the key to ABFS driver. 
> In this case the driver should have the following capabilities so that it can > accept and pass the encryption key as one of the request headers. > # There should be a way to configure the encryption key for different > accounts. > # If there is a key specified for a particular account, the same needs to be > sent along with the request headers. > *Config changes* > The key for an account can be specified in the core-site as follows. > fs.azure.account.client-provided-encryption-key.{account > name}.dfs.core.windows.net -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
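The per-account configuration key quoted above can be sketched as a core-site.xml entry. The account name "myaccount" and the value are placeholders, not values from the source; the exact value format expected by ABFS (e.g. a Base64-encoded AES-256 key) should be confirmed against the hadoop-azure documentation:

```xml
<configuration>
  <!-- Hypothetical example: client-provided encryption key for one account. -->
  <property>
    <name>fs.azure.account.client-provided-encryption-key.myaccount.dfs.core.windows.net</name>
    <value>BASE64-ENCODED-KEY-PLACEHOLDER</value>
  </property>
</configuration>
```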
[jira] [Work logged] (HADOOP-17536) Suport for customer provided encrption key
[ https://issues.apache.org/jira/browse/HADOOP-17536?focusedWorklogId=574859=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-574859 ] ASF GitHub Bot logged work on HADOOP-17536: --- Author: ASF GitHub Bot Created on: 31/Mar/21 13:40 Start Date: 31/Mar/21 13:40 Worklog Time Spent: 10m Work Description: bilaharith commented on a change in pull request #2707: URL: https://github.com/apache/hadoop/pull/2707#discussion_r604906761 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java ## @@ -512,6 +569,7 @@ public AbfsRestOperation flush(final String path, final long position, boolean r public AbfsRestOperation setPathProperties(final String path, final String properties) Review comment: Only one setPathProperties exists -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 574859) Time Spent: 3h 40m (was: 3.5h) > Suport for customer provided encrption key > -- > > Key: HADOOP-17536 > URL: https://issues.apache.org/jira/browse/HADOOP-17536 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.4.0 >Reporter: Bilahari T H >Assignee: Bilahari T H >Priority: Minor > Labels: pull-request-available > Time Spent: 3h 40m > Remaining Estimate: 0h > > The data for a particular customer needs to be encrypted on account level. At > server side the APIs will start accepting the encryption key as part of > request headers. The data will be encrypted/decrypted with the given key at > the server. > Since the ABFS FileSystem APIs are implementations for Hadoop FileSystem APIs > there is no direct way with which customer can pass the key to ABFS driver. 
> In this case the driver should have the following capabilities so that it can > accept and pass the encryption key as one of the request headers. > # There should be a way to configure the encryption key for different > accounts. > # If there is a key specified for a particular account, the same needs to be > sent along with the request headers. > *Config changes* > The key for an account can be specified in the core-site as follows. > fs.azure.account.client-provided-encryption-key.{account > name}.dfs.core.windows.net -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17536) Suport for customer provided encrption key
[ https://issues.apache.org/jira/browse/HADOOP-17536?focusedWorklogId=574860=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-574860 ] ASF GitHub Bot logged work on HADOOP-17536: --- Author: ASF GitHub Bot Created on: 31/Mar/21 13:40 Start Date: 31/Mar/21 13:40 Worklog Time Spent: 10m Work Description: bilaharith commented on a change in pull request #2707: URL: https://github.com/apache/hadoop/pull/2707#discussion_r604906591 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java ## @@ -536,6 +594,7 @@ public AbfsRestOperation setPathProperties(final String path, final String prope public AbfsRestOperation getPathStatus(final String path, final boolean includeProperties) throws AzureBlobFileSystemException { Review comment: Only one getPathStatus exists -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 574860) Time Spent: 3h 50m (was: 3h 40m) > Suport for customer provided encrption key > -- > > Key: HADOOP-17536 > URL: https://issues.apache.org/jira/browse/HADOOP-17536 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.4.0 >Reporter: Bilahari T H >Assignee: Bilahari T H >Priority: Minor > Labels: pull-request-available > Time Spent: 3h 50m > Remaining Estimate: 0h > > The data for a particular customer needs to be encrypted on account level. At > server side the APIs will start accepting the encryption key as part of > request headers. The data will be encrypted/decrypted with the given key at > the server. > Since the ABFS FileSystem APIs are implementations for Hadoop FileSystem APIs > there is no direct way with which customer can pass the key to ABFS driver. 
> In this case the driver should have the following capabilities so that it can > accept and pass the encryption key as one of the request headers. > # There should be a way to configure the encryption key for different > accounts. > # If there is a key specified for a particular account, the same needs to be > sent along with the request headers. > *Config changes* > The key for an account can be specified in the core-site as follows. > fs.azure.account.client-provided-encryption-key.{account > name}.dfs.core.windows.net -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2830: HDFS-15931 : Fix non-static inner classes for better memory management
hadoop-yetus commented on pull request #2830: URL: https://github.com/apache/hadoop/pull/2830#issuecomment-811072009 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 53s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 54s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 14s | | trunk passed | | +1 :green_heart: | compile | 5m 11s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 4m 46s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 16s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 1s | | trunk passed | | +1 :green_heart: | javadoc | 1m 30s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 13s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 4m 27s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 10s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 22s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 56s | | the patch passed | | +1 :green_heart: | compile | 5m 38s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 5m 38s | | the patch passed | | +1 :green_heart: | compile | 4m 39s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 4m 39s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 7s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2830/5/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 4 new + 225 unchanged - 9 fixed = 229 total (was 234) | | +1 :green_heart: | mvnsite | 1m 48s | | the patch passed | | +1 :green_heart: | javadoc | 1m 18s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 2s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 4m 43s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 23s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 371m 41s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2830/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | -1 :x: | unit | 23m 17s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2830/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 55s | | The patch does not generate ASF License warnings. 
| | | | 514m 55s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestStateAlignmentContextWithHA | | | hadoop.hdfs.server.datanode.TestBlockScanner | | | hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.TestPersistBlocks | | | hadoop.hdfs.TestLeaseRecovery | | | hadoop.hdfs.server.datanode.TestBlockRecovery | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | | hadoop.hdfs.TestSnapshotCommands | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.TestHDFSFileSystemContract | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination | |
[GitHub] [hadoop] virajjasani commented on pull request #2830: HDFS-15931 : Fix non-static inner classes for better memory management
virajjasani commented on pull request #2830: URL: https://github.com/apache/hadoop/pull/2830#issuecomment-811061009 Failed tests don't seem relevant. FYI @liuml07 @ayushtkn -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17618) ABFS: Partially obfuscate SAS object IDs in Logs
[ https://issues.apache.org/jira/browse/HADOOP-17618?focusedWorklogId=574817=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-574817 ] ASF GitHub Bot logged work on HADOOP-17618: --- Author: ASF GitHub Bot Created on: 31/Mar/21 12:57 Start Date: 31/Mar/21 12:57 Worklog Time Spent: 10m Work Description: sumangala-patki opened a new pull request #2845: URL: https://github.com/apache/hadoop/pull/2845 Delegation SAS tokens are created using various parameters for specifying details such as permissions and validity. The requests are logged, along with values of all the query parameters. This change will partially mask the values of the following object IDs representing the security principal: `skoid`, `saoid`, `suoid` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 574817) Remaining Estimate: 0h Time Spent: 10m > ABFS: Partially obfuscate SAS object IDs in Logs > > > Key: HADOOP-17618 > URL: https://issues.apache.org/jira/browse/HADOOP-17618 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.1 >Reporter: Sumangala Patki >Assignee: Sumangala Patki >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Delegation SAS tokens are created using various parameters for specifying > details such as permissions and validity. The requests are logged, along with > values of all the query parameters. This change will partially mask the > values of the following object IDs representing the security principal: > skoid, saoid, suoid -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17618) ABFS: Partially obfuscate SAS object IDs in Logs
[ https://issues.apache.org/jira/browse/HADOOP-17618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-17618: Labels: pull-request-available (was: ) > ABFS: Partially obfuscate SAS object IDs in Logs > > > Key: HADOOP-17618 > URL: https://issues.apache.org/jira/browse/HADOOP-17618 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.1 >Reporter: Sumangala Patki >Assignee: Sumangala Patki >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Delegation SAS tokens are created using various parameters for specifying > details such as permissions and validity. The requests are logged, along with > values of all the query parameters. This change will partially mask the > values of the following object IDs representing the security principal: > skoid, saoid, suoid -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
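The partial masking described in the pull request above (for the `skoid`, `saoid`, and `suoid` query parameters) can be illustrated with a short sketch. This is not the actual ABFS change, which lives in Java (the Yetus report points at AbfsHttpOperation); the keep-four/mask/keep-four policy and the function names below are assumptions made only to show the idea of obfuscating a value while leaving enough of it to correlate log lines:

```python
# Illustrative sketch only: partially mask the SAS object-ID query-parameter
# values before a request URL is logged. Policy and names are hypothetical.
from urllib.parse import parse_qsl, urlencode

# Query parameters whose values identify the security principal (per the JIRA).
SAS_OID_PARAM_KEYS = {"skoid", "saoid", "suoid"}

def partial_mask(value: str) -> str:
    """Keep a short prefix and suffix, mask the middle (illustrative policy)."""
    if len(value) <= 8:
        return "XXXX"
    return value[:4] + "XXXX" + value[-4:]

def mask_sas_query(query: str) -> str:
    """Return the query string with object-ID values partially masked."""
    masked = [
        (key, partial_mask(val) if key in SAS_OID_PARAM_KEYS else val)
        for key, val in parse_qsl(query)
    ]
    return urlencode(masked)
```

A masked query then still shows which principal-bearing parameters were present, and a recognizable fragment of each ID, without logging the full object ID.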
[GitHub] [hadoop] Hexiaoqiao commented on pull request #2838: HDFS-15937. Reduce memory used during datanode layout upgrade
Hexiaoqiao commented on pull request #2838:
URL: https://github.com/apache/hadoop/pull/2838#issuecomment-811026690

Thanks @sodonnel and @jojochuang for your detailed comments. It makes sense to me. +1 from my side.
[jira] [Commented] (HADOOP-17612) Bump default Zookeeper version to 3.7.0
[ https://issues.apache.org/jira/browse/HADOOP-17612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17312297#comment-17312297 ]

Viraj Jasani commented on HADOOP-17612:
Ah, I see. Bumping Zookeeper to 3.7 might not be doable as soon as I expected, because we use Curator, and so far Curator supports 3.6 as the max version of Zookeeper. Some ZooKeeperServer-level refactoring was done in the 3.7 release, due to which Curator also needs some minor code changes in order to support 3.7. We might have to hold off on this until there is a new Curator release that supports 3.7.

> Bump default Zookeeper version to 3.7.0
>
> Key: HADOOP-17612
> URL: https://issues.apache.org/jira/browse/HADOOP-17612
> Project: Hadoop Common
> Issue Type: Task
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
> Time Spent: 40m
> Remaining Estimate: 0h
>
> We can bump Zookeeper version to 3.7.0 for trunk.
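The constraint discussed in the comment above amounts to a dependency-management decision: the ZooKeeper version stays pinned to the newest line Curator supports until a Curator release adds 3.7 support. A hypothetical Maven properties fragment illustrating this; the property names and version numbers are assumptions for illustration, not Hadoop's actual pom.xml:

```xml
<!-- Illustrative only: pin ZooKeeper to the newest line Curator supports. -->
<properties>
  <!-- Curator 5.x is built against ZooKeeper 3.6.x; bumping
       zookeeper.version to 3.7.0 would require a newer Curator release
       that accounts for the 3.7 ZooKeeperServer refactoring. -->
  <zookeeper.version>3.6.2</zookeeper.version>
  <curator.version>5.1.0</curator.version>
</properties>
```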
[GitHub] [hadoop] tomscut commented on pull request #2837: HDFS-15938. Fix java doc in FSEditLog
tomscut commented on pull request #2837:
URL: https://github.com/apache/hadoop/pull/2837#issuecomment-810987001

> LGTM

Thanks @aajisaka for your review. The failed unit tests are unrelated to this change, and they pass locally.
[jira] [Assigned] (HADOOP-17618) ABFS: Partially obfuscate SAS object IDs in Logs
[ https://issues.apache.org/jira/browse/HADOOP-17618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sumangala Patki reassigned HADOOP-17618:
Assignee: Sumangala Patki

> ABFS: Partially obfuscate SAS object IDs in Logs
>
> Key: HADOOP-17618
> URL: https://issues.apache.org/jira/browse/HADOOP-17618
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.3.1
> Reporter: Sumangala Patki
> Assignee: Sumangala Patki
> Priority: Major
>
> Delegation SAS tokens are created using various parameters for specifying
> details such as permissions and validity. The requests are logged, along with
> values of all the query parameters. This change will partially mask the
> values of the following object IDs representing the security principal:
> skoid, saoid, suoid
[jira] [Created] (HADOOP-17618) ABFS: Partially obfuscate SAS object IDs in Logs
Sumangala Patki created HADOOP-17618:

Summary: ABFS: Partially obfuscate SAS object IDs in Logs
Key: HADOOP-17618
URL: https://issues.apache.org/jira/browse/HADOOP-17618
Project: Hadoop Common
Issue Type: Sub-task
Components: fs/azure
Affects Versions: 3.3.1
Reporter: Sumangala Patki

Delegation SAS tokens are created using various parameters for specifying details such as permissions and validity. The requests are logged, along with values of all the query parameters. This change will partially mask the values of the following object IDs representing the security principal: skoid, saoid, suoid