[jira] [Commented] (HADOOP-16199) KMSLoadBlanceClientProvider does not select token correctly
[ https://issues.apache.org/jira/browse/HADOOP-16199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804630#comment-16804630 ]

Hudson commented on HADOOP-16199:

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16302 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/16302/])
HADOOP-16199. KMSLoadBlanceClientProvider does not select token (github: rev f41f938b2e498161da96bfad77410871a3a85728)
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java
* (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/kms/TestLoadBalancingKMSClientProvider.java

> KMSLoadBlanceClientProvider does not select token correctly
> ---
>
> Key: HADOOP-16199
> URL: https://issues.apache.org/jira/browse/HADOOP-16199
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.0.2
> Reporter: Xiaoyu Yao
> Assignee: Xiaoyu Yao
> Priority: Major
> Labels: kms
> Fix For: 3.3.0
>
>
> After HADOOP-14445 and HADOOP-15997, there are still cases where
> KMSLoadBlanceClientProvider does not select the token correctly.
> Here is the use case:
> The new configuration key
> hadoop.security.kms.client.token.use.uri.format=true is set across all the
> clusters, including both the Submitter and the YARN RM (renewer), which is not covered
> in the test matrix in this [HADOOP-14445
> comment|https://issues.apache.org/jira/browse/HADOOP-14445?focusedCommentId=16505761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16505761].
> I will post the debug log and the proposed fix shortly, cc: [~xiaochen] and
> [~jojochuang].

-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
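For context, the `hadoop.security.kms.client.token.use.uri.format` setting changes which service name the KMS client uses when looking up a delegation token: the provider-URI form (e.g. `kms://http@host:port/kms`) instead of the legacy `host:port` form, so both sides of a handoff (submitter and renewer) must agree on the format. A minimal, stdlib-only sketch of the two lookup styles — the method name, token map, and exact string formats here are illustrative assumptions, not the actual LoadBalancingKMSClientProvider code:

```java
import java.util.HashMap;
import java.util.Map;

public class KmsTokenLookup {
    // Look up a delegation token by its service key. With the URI format
    // enabled the key looks like "kms://http@host:port/kms"; otherwise the
    // legacy "host:port" form is used. Illustrative only.
    static String selectToken(Map<String, String> tokensByService,
                              String host, int port, boolean useUriFormat) {
        String service = useUriFormat
            ? "kms://http@" + host + ":" + port + "/kms"
            : host + ":" + port;
        return tokensByService.get(service);
    }

    public static void main(String[] args) {
        Map<String, String> tokens = new HashMap<>();
        tokens.put("kms://http@kms1:9600/kms", "uri-token");
        tokens.put("kms1:9600", "legacy-token");

        // A client configured for one format will not find a token stored
        // under the other format's service name.
        System.out.println(selectToken(tokens, "kms1", 9600, true));   // uri-token
        System.out.println(selectToken(tokens, "kms1", 9600, false));  // legacy-token
    }
}
```

The failure mode described in the issue is of this shape: if the token was obtained under one service-name format and the lookup happens under the other, `selectToken` returns nothing and the client falls back to an unauthenticated path.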
[GitHub] [hadoop] xiaoyuyao commented on issue #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf
xiaoyuyao commented on issue #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf URL: https://github.com/apache/hadoop/pull/660#issuecomment-477868451 Actually we usually only need one PR for trunk and then cherry-pick the change to ozone-0.4. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16199) KMSLoadBlanceClientProvider does not select token correctly
[ https://issues.apache.org/jira/browse/HADOOP-16199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoyu Yao updated HADOOP-16199:
Resolution: Fixed
Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
Status: Resolved (was: Patch Available)

Thanks [~jojochuang] for the review. I just committed the patch to trunk. Will backport to 3.2, 3.1, and 3.0, where HADOOP-14445 is included.

> KMSLoadBlanceClientProvider does not select token correctly
> ---
>
> Key: HADOOP-16199
> URL: https://issues.apache.org/jira/browse/HADOOP-16199
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.0.2
> Reporter: Xiaoyu Yao
> Assignee: Xiaoyu Yao
> Priority: Major
> Labels: kms
> Fix For: 3.3.0
>
>
> After HADOOP-14445 and HADOOP-15997, there are still cases where
> KMSLoadBlanceClientProvider does not select the token correctly.
> Here is the use case:
> The new configuration key
> hadoop.security.kms.client.token.use.uri.format=true is set across all the
> clusters, including both the Submitter and the YARN RM (renewer), which is not covered
> in the test matrix in this [HADOOP-14445
> comment|https://issues.apache.org/jira/browse/HADOOP-14445?focusedCommentId=16505761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16505761].
> I will post the debug log and the proposed fix shortly, cc: [~xiaochen] and
> [~jojochuang].
[GitHub] [hadoop] adoroszlai commented on issue #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf
adoroszlai commented on issue #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/660#issuecomment-477867857

> +1. Just noticed #660 is for ozone-0.4 and #659 is for trunk.
> Why do we need different dependencies?

Nice catch. I was using different compose files for different branches. We need the other two dependencies for JDK 11 on trunk, too. Pushed an additional commit to the other PR.
[GitHub] [hadoop] xiaoyuyao merged pull request #642: HADOOP-16199. KMSLoadBlanceClientProvider does not select token correctly. Contributed by Xiaoyu Yao.
xiaoyuyao merged pull request #642: HADOOP-16199. KMSLoadBlanceClientProvider does not select token correctly. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/642
[GitHub] [hadoop] xiaoyuyao edited a comment on issue #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf
xiaoyuyao edited a comment on issue #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/660#issuecomment-477848297

+1. Just noticed #660 is for ozone-0.4 and #659 is for trunk. Why do we need different dependencies?
[GitHub] [hadoop] xiaoyuyao commented on issue #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf
xiaoyuyao commented on issue #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/660#issuecomment-477848297

+1. I will commit this shortly.
[jira] [Commented] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8
[ https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804530#comment-16804530 ] Hadoop QA commented on HADOOP-16219: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 48s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 26s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 34s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 40s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 5s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 7s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 42s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} | || || 
|| || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 51s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 15s{color} | {color:red} root in the patch failed with JDK v1.7.0_95. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 15s{color} | {color:red} root in the patch failed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 1s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 1s{color} | {color:red} root-jdk1.8.0_191 with JDK v1.8.0_191 generated 1 new + 1344 unchanged - 1 fixed = 1345 total (was 1345) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s{color} | {color:red} root in the patch failed with JDK v1.7.0_95. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 4s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 29s{color} | {color:red} root in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 55s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}220m 40s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.registry.secure.TestSecureLogins | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:da67579 | | JIRA Issue | HADOOP-16219 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964096/HADOOP-16219-branch-2-001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 35e5c5887c31 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Li
[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r270256924

## File path: hadoop-ozone/dist/src/main/smoketest/commonlib.robot

@@ -13,9 +13,15 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-*** Keywords ***
+*** Settings ***
+Library    OperatingSystem
+Library    String
+Library    BuiltIn
+*** Variables ***

Review comment: whitespace:tabs in line
[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r270256929

## File path: hadoop-ozone/dist/src/main/smoketest/test.sh

@@ -72,13 +72,20 @@ execute_tests(){
   docker-compose -f "$COMPOSE_FILE" down
   docker-compose -f "$COMPOSE_FILE" up -d --scale datanode=3
   wait_for_datanodes "$COMPOSE_FILE"
+
+  if [ ${COMPOSE_DIR} == "ozonesecure" ]; then
+    SECURITY_ENABLED="true"
+  else
+    SECURITY_ENABLED="false"

Review comment: whitespace:tabs in line
[GitHub] [hadoop] hadoop-yetus commented on issue #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
hadoop-yetus commented on issue #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/632#issuecomment-477833013

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 25 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 963 | trunk passed |
| +1 | compile | 23 | trunk passed |
| +1 | mvnsite | 24 | trunk passed |
| +1 | shadedclient | 644 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 16 | trunk passed |
||| _ Patch Compile Tests _ |
| -1 | mvninstall | 18 | dist in the patch failed. |
| +1 | compile | 20 | the patch passed |
| +1 | javac | 20 | the patch passed |
| +1 | mvnsite | 20 | the patch passed |
| +1 | shellcheck | 0 | There were no new shellcheck issues. |
| +1 | shelldocs | 13 | There were no new shelldocs issues. |
| -1 | whitespace | 0 | The patch has 3 line(s) with tabs. |
| +1 | shadedclient | 730 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 16 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 21 | dist in the patch passed. |
| +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
| | | 2681 | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-632/8/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/632 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient shellcheck shelldocs |
| uname | Linux 62659a6c688e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / d7a2f94 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| shellcheck | v0.4.6 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/8/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt |
| whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/8/artifact/out/whitespace-tabs.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/8/testReport/ |
| Max. process+thread count | 445 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/8/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r270256926

## File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure-s3.robot

@@ -0,0 +1,44 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation    Smoke test to start cluster with docker-compose environments.
+Library    OperatingSystem
+Library    String
+Library    BuiltIn
+Resource    ../commonlib.robot
+Resource    ../s3/commonawslib.robot
+
+*** Variables ***
+${ENDPOINT_URL}    http://s3g:9878
+
+*** Keywords ***
+Setup volume names
+    ${random}    Generate Random String    2    [NUMBERS]
+    Set Suite Variable    ${volume1}    fstest${random}
+    Set Suite Variable    ${volume2}    fstest2${random}
+
+*** Test Cases ***
+Secure S3 test Success
+    Run Keyword    Setup s3 tests
+    ${output} =    Execute    aws s3api --endpoint-url ${ENDPOINT_URL} create-bucket --bucket bucket-test123
+    ${output} =    Execute    aws s3api --endpoint-url ${ENDPOINT_URL} list-buckets
+    Should contain    ${output}    bucket-test123
+
+Secure S3 test Failure

Review comment: whitespace:tabs in line
[jira] [Commented] (HADOOP-16208) Do Not Log InterruptedException in Client
[ https://issues.apache.org/jira/browse/HADOOP-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804508#comment-16804508 ] Hadoop QA commented on HADOOP-16208: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 13s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 30s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 50s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 50s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 28s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 0m 32s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 25s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 27s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 72m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-16208 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964113/HADOOP-16208.2.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c7f90d6091ac 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d7a2f94 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | mvninstall | https://builds.apache.org/job/PreCommit-HADOOP-Build/16086/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt | | compile | https://builds.apache.org/job/PreCommit-HADOOP-Build/16086/artifact/out/patch-compile-root.txt | | javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/16086/artifact/out/patch-compile-root.txt | | mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Build/16086/arti
[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #651: HDDS-1339. Implement ratis snapshots on OM
hadoop-yetus commented on a change in pull request #651: HDDS-1339. Implement ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#discussion_r270255127

## File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java

@@ -534,4 +536,84 @@ public void testReadRequest() throws Exception {
     proxyProvider.getCurrentProxyOMNodeId());
   }
 }
+
+  @Test
+  public void testOMRatisSnapshot() throws Exception {
+    String userName = "user" + RandomStringUtils.randomNumeric(5);
+    String adminName = "admin" + RandomStringUtils.randomNumeric(5);
+    String volumeName = "volume" + RandomStringUtils.randomNumeric(5);
+    String bucketName = "bucket" + RandomStringUtils.randomNumeric(5);
+
+    VolumeArgs createVolumeArgs = VolumeArgs.newBuilder()
+        .setOwner(userName)
+        .setAdmin(adminName)
+        .build();
+
+    objectStore.createVolume(volumeName, createVolumeArgs);
+    OzoneVolume retVolumeinfo = objectStore.getVolume(volumeName);
+
+    retVolumeinfo.createBucket(bucketName);
+    OzoneBucket ozoneBucket = retVolumeinfo.getBucket(bucketName);
+
+    String leaderOMNodeId = objectStore.getClientProxy().getOMProxyProvider()
+        .getCurrentProxyOMNodeId();
+    OzoneManager ozoneManager = cluster.getOzoneManager(leaderOMNodeId);
+
+    // Send commands to ratis to increase the log index so that ratis
+    // triggers a snapshot on the state machine.
+    long appliedLogIndex = 0;
+    while (appliedLogIndex <= SNAPSHOT_THRESHOLD) {
+      createKey(ozoneBucket);
+      appliedLogIndex = ozoneManager.getOmRatisServer()
+          .getStateMachineLastAppliedIndex();
+    }
+
+    GenericTestUtils.waitFor(() -> {
+      if (ozoneManager.loadRatisSnapshotIndex() > 0) {
+        return true;
+      }
+      return false;
+    }, 1000, 10);
+
+    // The current lastAppliedLogIndex on the state machine should be greater
+    // than or equal to the saved snapshot index.
+    long smLastAppliedIndex =
+        ozoneManager.getOmRatisServer().getStateMachineLastAppliedIndex();
+    long ratisSnapshotIndex = ozoneManager.loadRatisSnapshotIndex();
+    Assert.assertTrue("LastAppliedIndex on OM State Machine ("
+        + smLastAppliedIndex + ") is less than the saved snapshot index("
+        + ratisSnapshotIndex + ").",
+        smLastAppliedIndex >= ratisSnapshotIndex);
+
+    // Add more transactions to Ratis to trigger another snapshot
+    while (appliedLogIndex <= (smLastAppliedIndex + SNAPSHOT_THRESHOLD)) {
+      createKey(ozoneBucket);
+      appliedLogIndex = ozoneManager.getOmRatisServer()
+          .getStateMachineLastAppliedIndex();
+    }
+
+    GenericTestUtils.waitFor(() -> {
+      if (ozoneManager.loadRatisSnapshotIndex() > 0) {
+        return true;
+      }
+      return false;
+    }, 1000, 10);
+
+    // The new snapshot index must be greater than the previous snapshot index
+    long ratisSnapshotIndexNew = ozoneManager.loadRatisSnapshotIndex();
+    Assert.assertTrue("Latest snapshot index must be greater than previous "
+        + "snapshot indices", ratisSnapshotIndexNew > ratisSnapshotIndex);

Review comment: whitespace:end of line
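The patch above waits for the snapshot index with `GenericTestUtils.waitFor`, which polls a condition until a timeout. A self-contained sketch of that poll-until-true pattern in plain JDK (illustrative, not the Hadoop implementation; the real `waitFor` throws `TimeoutException` on expiry instead of returning a boolean). It also shows that a lambda of the form `if (cond) { return true; } return false;` reduces to just `cond`:

```java
import java.util.function.BooleanSupplier;

public class WaitFor {
    // Poll `check` every checkEveryMillis until it returns true or
    // waitForMillis elapses; returns whether the condition was met.
    static boolean waitFor(BooleanSupplier check,
                           int checkEveryMillis, int waitForMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + waitForMillis;
        while (true) {
            if (check.getAsBoolean()) {
                return true;
            }
            if (System.currentTimeMillis() >= deadline) {
                return false;
            }
            Thread.sleep(checkEveryMillis);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long[] snapshotIndex = {5}; // stand-in for loadRatisSnapshotIndex()
        // Condition written directly as a boolean expression, rather than
        // `if (snapshotIndex[0] > 0) { return true; } return false;`.
        System.out.println(waitFor(() -> snapshotIndex[0] > 0, 10, 1000)); // true
        System.out.println(waitFor(() -> false, 10, 50)); // false
    }
}
```

Note that in the patch the arguments are `(condition, 1000, 10)`, i.e. a 1000 ms poll interval against a 10 ms overall wait; with this argument order the condition is effectively checked only once, which may or may not be intended.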
[GitHub] [hadoop] hadoop-yetus commented on issue #651: HDDS-1339. Implement ratis snapshots on OM
hadoop-yetus commented on issue #651: HDDS-1339. Implement ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#issuecomment-477830367

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 23 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 22 | Maven dependency ordering for branch |
| +1 | mvninstall | 975 | trunk passed |
| +1 | compile | 936 | trunk passed |
| +1 | checkstyle | 229 | trunk passed |
| -1 | mvnsite | 54 | integration-test in trunk failed. |
| +1 | shadedclient | 1140 | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 196 | trunk passed |
| +1 | javadoc | 160 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 24 | Maven dependency ordering for patch |
| -1 | mvninstall | 25 | integration-test in the patch failed. |
| +1 | compile | 881 | the patch passed |
| +1 | javac | 881 | the patch passed |
| +1 | checkstyle | 188 | the patch passed |
| +1 | mvnsite | 170 | the patch passed |
| -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | xml | 2 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 673 | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 223 | the patch passed |
| +1 | javadoc | 159 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 74 | common in the patch passed. |
| +1 | unit | 47 | common in the patch passed. |
| -1 | unit | 597 | integration-test in the patch failed. |
| +1 | unit | 57 | ozone-manager in the patch passed. |
| +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
| | | 7018 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.ozone.ozShell.TestOzoneShell |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-651/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/651 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux d097c6508e41 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / d7a2f94 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-651/2/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt |
| findbugs | v3.1.0-RC1 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-651/2/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt |
| whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-651/2/artifact/out/whitespace-eol.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-651/2/artifact/out/patch-unit-hadoop-ozone_integration-test.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-651/2/testReport/ |
| Max. process+thread count | 4099 (vs. ulimit of 5500) |
| modules | C: hadoop-hdds/common hadoop-ozone/common hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-651/2/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-16214) Kerberos name implementation in Hadoop does not accept principals with more than two components
[ https://issues.apache.org/jira/browse/HADOOP-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804492#comment-16804492 ] Hadoop QA commented on HADOOP-16214: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 33s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 31s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 44s{color} | {color:green} hadoop-auth in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 91m 32s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-16214 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964106/HADOOP-16214.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b23e9a18bdfa 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d7a2f94 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16085/testReport/ | | Max. process+thread count | 306 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-auth U: hadoop-common-project/hadoop-auth | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16085/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Kerberos name implementation in Hadoop does not accept principals with more > than two components > -
[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar. URL: https://github.com/apache/hadoop/pull/632#discussion_r270246025

## File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure-s3.robot

## @@ -0,0 +1,44 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation  Smoke test to start cluster with docker-compose environments.
+Library  OperatingSystem
+Library  String
+Library  BuiltIn
+Resource  ../commonlib.robot
+Resource  ../s3/commonawslib.robot
+
+*** Variables ***
+${ENDPOINT_URL}  http://s3g:9878
+
+*** Keywords ***
+Setup volume names
+${random}  Generate Random String  2  [NUMBERS]
+Set Suite Variable  ${volume1}  fstest${random}
+Set Suite Variable  ${volume2}  fstest2${random}
+
+*** Test Cases ***
+Secure S3 test Success
+Run Keyword  Setup s3 tests
+${output} =  Execute  aws s3api --endpoint-url ${ENDPOINT_URL} create-bucket --bucket bucket-test123
+${output} =  Execute  aws s3api --endpoint-url ${ENDPOINT_URL} list-buckets
+Should contain  ${output}  bucket-test123
+
+Secure S3 test Failure

Review comment: whitespace:tabs in line

This is an automated message from the Apache Git Service.
[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar. URL: https://github.com/apache/hadoop/pull/632#discussion_r270246027

## File path: hadoop-ozone/dist/src/main/smoketest/test.sh

## @@ -72,13 +72,20 @@ execute_tests(){
 docker-compose -f "$COMPOSE_FILE" down
 docker-compose -f "$COMPOSE_FILE" up -d --scale datanode=3
 wait_for_datanodes "$COMPOSE_FILE"
+
+ if [ "${COMPOSE_DIR}" == "ozonesecure" ]; then
+ SECURITY_ENABLED="true"
+ else
+ SECURITY_ENABLED="false"

Review comment: whitespace:tabs in line
[GitHub] [hadoop] hadoop-yetus commented on issue #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request
hadoop-yetus commented on issue #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request URL: https://github.com/apache/hadoop/pull/606#issuecomment-477818995 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 27 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 971 | trunk passed | | +1 | compile | 33 | trunk passed | | +1 | checkstyle | 20 | trunk passed | | +1 | mvnsite | 36 | trunk passed | | +1 | shadedclient | 686 | branch has no errors when building and testing our client artifacts. | | +1 | findbugs | 65 | trunk passed | | +1 | javadoc | 25 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 30 | the patch passed | | +1 | compile | 28 | the patch passed | | +1 | javac | 28 | the patch passed | | -0 | checkstyle | 18 | hadoop-tools/hadoop-aws: The patch generated 1 new + 10 unchanged - 0 fixed = 11 total (was 10) | | +1 | mvnsite | 32 | the patch passed | | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 | shadedclient | 719 | patch has no errors when building and testing our client artifacts. | | +1 | findbugs | 44 | the patch passed | | +1 | javadoc | 25 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 280 | hadoop-aws in the patch passed. | | +1 | asflicense | 25 | The patch does not generate ASF License warnings. 
| | | | 3153 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-606/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/606 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 396d24996991 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / d7a2f94 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-606/6/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-606/6/artifact/out/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-606/6/testReport/ | | Max. process+thread count | 411 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-606/6/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request
hadoop-yetus commented on a change in pull request #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request URL: https://github.com/apache/hadoop/pull/606#discussion_r270246108

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/RemoteFileChangedException.java

## @@ -32,6 +32,12 @@
 @InterfaceStability.Unstable
 public class RemoteFileChangedException extends PathIOException {

+ /**
+ * Error message used when mapping a 412 to this exception.
+ */

Review comment: whitespace:end of line
[GitHub] [hadoop] hadoop-yetus commented on issue #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
hadoop-yetus commented on issue #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar. URL: https://github.com/apache/hadoop/pull/632#issuecomment-477818901 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 23 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 964 | trunk passed | | +1 | compile | 25 | trunk passed | | +1 | mvnsite | 39 | trunk passed | | +1 | shadedclient | 608 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 18 | trunk passed | ||| _ Patch Compile Tests _ | | -1 | mvninstall | 18 | dist in the patch failed. | | +1 | compile | 17 | the patch passed | | +1 | javac | 17 | the patch passed | | +1 | mvnsite | 18 | the patch passed | | +1 | shellcheck | 0 | There were no new shellcheck issues. | | +1 | shelldocs | 13 | There were no new shelldocs issues. | | -1 | whitespace | 0 | The patch has 3 line(s) with tabs. | | +1 | shadedclient | 669 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 18 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 20 | dist in the patch passed. | | +1 | asflicense | 25 | The patch does not generate ASF License warnings. 
| | | | 2607 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-632/7/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/632 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient shellcheck shelldocs | | uname | Linux dbbc27fea525 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / d7a2f94 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | shellcheck | v0.4.6 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/7/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/7/artifact/out/whitespace-tabs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/7/testReport/ | | Max. process+thread count | 446 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/7/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar. URL: https://github.com/apache/hadoop/pull/632#discussion_r270246020

## File path: hadoop-ozone/dist/src/main/smoketest/commonlib.robot

## @@ -13,9 +13,15 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-*** Keywords ***
+*** Settings ***
+Library  OperatingSystem
+Library  String
+Library  BuiltIn
+*** Variables ***

Review comment: whitespace:tabs in line
[jira] [Updated] (HADOOP-16208) Do Not Log InterruptedException in Client
[ https://issues.apache.org/jira/browse/HADOOP-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HADOOP-16208: Status: Open (was: Patch Available) > Do Not Log InterruptedException in Client > - > > Key: HADOOP-16208 > URL: https://issues.apache.org/jira/browse/HADOOP-16208 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2.0 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > Attachments: HADOOP-16208.1.patch, HADOOP-16208.2.patch > > > {code:java} > } catch (InterruptedException e) { > Thread.currentThread().interrupt(); > LOG.warn("interrupted waiting to send rpc request to server", e); > throw new IOException(e); > } > {code} > https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L1450 > I'm working on a project that uses an {{ExecutorService}} to launch a bunch > of threads. Each thread spins up an HDFS client connection. At any point in > time, the program can terminate and call {{ExecutorService#shutdownNow()}} to > forcibly close vis-à-vis {{Thread#interrupt()}}. At that point, I get a > cascade of logging from the above code and there's no easy way to turn it > off. > "Log and throw" is generally frowned upon, just throw the {{Exception}} and > move on. > https://community.oracle.com/docs/DOC-983543#logAndThrow -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
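The catch block quoted in the issue logs the interrupt and then rethrows it. For illustration, here is a minimal, self-contained sketch of the pattern the report argues for: restore the interrupt flag and propagate an `InterruptedIOException` without logging first, leaving the caller to decide what to record. Class and method names here are hypothetical stand-ins for `Client#sendRpcRequest`, not code from the actual patch.

```java
import java.io.IOException;
import java.io.InterruptedIOException;

public class InterruptPropagation {

    // Hypothetical stand-in for the blocking send in ipc.Client: waits until
    // interrupted, then wraps the InterruptedException instead of logging it.
    static void sendRpcRequest() throws IOException {
        try {
            Thread.sleep(10_000); // stand-in for waiting on the RPC connection
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag
            InterruptedIOException iioe = new InterruptedIOException(
                "Interrupted waiting to send rpc request to server");
            iioe.initCause(e);
            throw iioe; // propagate; the caller decides whether to log
        }
    }

    // Runs sendRpcRequest on a worker thread, interrupts it, and reports
    // the simple name of whatever IOException came out.
    static String probe() throws InterruptedException {
        final String[] thrown = {"none"};
        Thread worker = new Thread(() -> {
            try {
                sendRpcRequest();
            } catch (IOException e) {
                thrown[0] = e.getClass().getSimpleName();
            }
        });
        worker.start();
        worker.interrupt();
        worker.join();
        return thrown[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(probe()); // prints: InterruptedIOException
    }
}
```

Because `InterruptedIOException` is a subclass of `IOException`, existing callers that catch `IOException` keep working; callers that care about interruption can now distinguish it without any log spam from the library layer.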
[jira] [Commented] (HADOOP-16208) Do Not Log InterruptedException in Client
[ https://issues.apache.org/jira/browse/HADOOP-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804468#comment-16804468 ] David Mollitor commented on HADOOP-16208: - Thanks [~ste...@apache.org] for pointing me at it. New patch supplied. I changed it up a little. Let me know what you think.
[jira] [Updated] (HADOOP-16208) Do Not Log InterruptedException in Client
[ https://issues.apache.org/jira/browse/HADOOP-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HADOOP-16208: Status: Patch Available (was: Open)
[jira] [Updated] (HADOOP-16208) Do Not Log InterruptedException in Client
[ https://issues.apache.org/jira/browse/HADOOP-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HADOOP-16208: Attachment: HADOOP-16208.2.patch
[jira] [Commented] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8
[ https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804465#comment-16804465 ] Steve Loughran commented on HADOOP-16219: - > just to be clear, this flies directly in the face of our compatibility >guidelines by being an incompatible change in a minor version release, right? well it would be, if we didn't explicitly call out JVM EOL as something that can force an update https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html#Policies > [JDK8] Set minimum version of Hadoop 2 to JDK 8 > --- > > Key: HADOOP-16219 > URL: https://issues.apache.org/jira/browse/HADOOP-16219 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 2.10.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-16219-branch-2-001.patch > > > Java 7 is long EOL; having branch-2 require it is simply making the release > process a pain (we aren't building, testing, or releasing on java 7 JVMs any > more, are we?). > Staying on java 7 complicates backporting, JAR updates for CVEs (hello > Guava!) &c are becoming impossible. > Proposed: increment javac.version = 1.8
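The proposal above ("increment javac.version = 1.8") amounts to a one-property bump in the Maven build. A hedged sketch of what that could look like — the `javac.version` property name matches what the issue mentions, but treat the surrounding structure and exact pom location as assumptions; the authoritative change is in HADOOP-16219-branch-2-001.patch:

```xml
<!-- Hypothetical fragment of a parent pom illustrating the proposal;
     the real patch may adjust additional properties. -->
<properties>
  <!-- was 1.7; Java 7 is long EOL -->
  <javac.version>1.8</javac.version>
  <maven.compiler.source>${javac.version}</maven.compiler.source>
  <maven.compiler.target>${javac.version}</maven.compiler.target>
</properties>
```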
[jira] [Resolved] (HADOOP-15652) Fix typos SPENGO into SPNEGO
[ https://issues.apache.org/jira/browse/HADOOP-15652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-15652. - Resolution: Fixed Fix Version/s: 3.3.0 +1, committed. thanks! > Fix typos SPENGO into SPNEGO > > > Key: HADOOP-15652 > URL: https://issues.apache.org/jira/browse/HADOOP-15652 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: okumin >Assignee: okumin >Priority: Trivial > Fix For: 3.3.0 > > > There are some typo words `SPENGO` which should be `SPNEGO`.
[jira] [Assigned] (HADOOP-15652) Fix typos SPENGO into SPNEGO
[ https://issues.apache.org/jira/browse/HADOOP-15652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-15652: --- Assignee: okumin
[GitHub] [hadoop] steveloughran closed pull request #408: HADOOP-15652. Fix typos SPENGO into SPNEGO
steveloughran closed pull request #408: HADOOP-15652. Fix typos SPENGO into SPNEGO URL: https://github.com/apache/hadoop/pull/408
[GitHub] [hadoop] steveloughran commented on issue #408: HADOOP-15652. Fix typos SPENGO into SPNEGO
steveloughran commented on issue #408: HADOOP-15652. Fix typos SPENGO into SPNEGO URL: https://github.com/apache/hadoop/pull/408#issuecomment-477814621 +1, committing. Thanks
[GitHub] [hadoop] steveloughran closed pull request #460: Hadoop-15994
steveloughran closed pull request #460: Hadoop-15994 URL: https://github.com/apache/hadoop/pull/460
[GitHub] [hadoop] steveloughran commented on issue #413: Adding EC2ContainerCredentialsProviderWrapper to the credential providers
steveloughran commented on issue #413: Adding EC2ContainerCredentialsProviderWrapper to the credential providers URL: https://github.com/apache/hadoop/pull/413#issuecomment-477811705 Can you create an ASF JIRA for this under HADOOP-15620? We don't directly look at the PR list, so things like this get lost. Now, regarding the patch, I don't think it is directly needed: you can just declare `com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper` as the credential provider and it will be picked up automatically. We just do the default set so that most people don't need to worry about it (and we've just added the session settings to the list). How about you add something to the S3A docs telling people what to do?
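The suggestion above — declaring the wrapper as the credential provider rather than patching the default list — can be sketched as a core-site.xml entry. `fs.s3a.aws.credentials.provider` is the standard S3A property; the value is the AWS SDK v1 class named in the PR title. Verify both against your Hadoop and SDK versions before relying on this:

```xml
<!-- Sketch: ask S3A to use the EC2/ECS container credentials chain directly. -->
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper</value>
</property>
```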
[GitHub] [hadoop] steveloughran commented on issue #432: Update committers.md
steveloughran commented on issue #432: Update committers.md URL: https://github.com/apache/hadoop/pull/432#issuecomment-477811080 Just noticed this. Did you file an Apache JIRA for it? It's how we manage changes.
[GitHub] [hadoop] steveloughran commented on issue #440: HADOOP-15910 Fix Javadoc for LdapAuthenticationHandler#ENABLE_START_TLS
steveloughran commented on issue #440: HADOOP-15910 Fix Javadoc for LdapAuthenticationHandler#ENABLE_START_TLS URL: https://github.com/apache/hadoop/pull/440#issuecomment-477810777 +1 from me. Before I merge it, what name do you want to be credited by? Add it in the description field so that when I press "merge" it will go in. Git will record you as the author, but with cherry-picking across branches, it's easy for the original author to get lost.
[GitHub] [hadoop] steveloughran closed pull request #515: HADOOP-16134 001- initial design of a WriteOperationsContext
steveloughran closed pull request #515: HADOOP-16134 001- initial design of a WriteOperationsContext URL: https://github.com/apache/hadoop/pull/515
[GitHub] [hadoop] steveloughran commented on issue #515: HADOOP-16134 001- initial design of a WriteOperationsContext
steveloughran commented on issue #515: HADOOP-16134 001- initial design of a WriteOperationsContext URL: https://github.com/apache/hadoop/pull/515#issuecomment-477809831 closing this for now; doing a different refactoring
[GitHub] [hadoop] steveloughran commented on issue #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request
steveloughran commented on issue #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request URL: https://github.com/apache/hadoop/pull/606#issuecomment-477809025 Tested against S3 Ireland with S3Guard + DDB + auth. Apart from the DynamoDB per-request billing test failures covered in #647, all good.
[GitHub] [hadoop] hanishakoneru commented on issue #651: HDDS-1339. Implement ratis snapshots on OM
hanishakoneru commented on issue #651: HDDS-1339. Implement ratis snapshots on OM URL: https://github.com/apache/hadoop/pull/651#issuecomment-477808698 Thank you Bharat for the review. I have updated the patch to address your comments.
[GitHub] [hadoop] hanishakoneru commented on a change in pull request #651: HDDS-1339. Implement ratis snapshots on OM
hanishakoneru commented on a change in pull request #651: HDDS-1339. Implement ratis snapshots on OM URL: https://github.com/apache/hadoop/pull/651#discussion_r270236900

## File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
## @@ -534,4 +536,61 @@ public void testReadRequest() throws Exception {
        proxyProvider.getCurrentProxyOMNodeId());
     }
   }
+
+  @Test
+  public void testOMRatisSnapshot() throws Exception {
+    String userName = "user" + RandomStringUtils.randomNumeric(5);
+    String adminName = "admin" + RandomStringUtils.randomNumeric(5);
+    String volumeName = "volume" + RandomStringUtils.randomNumeric(5);
+    String bucketName = "bucket" + RandomStringUtils.randomNumeric(5);
+
+    VolumeArgs createVolumeArgs = VolumeArgs.newBuilder()
+        .setOwner(userName)
+        .setAdmin(adminName)
+        .build();
+
+    objectStore.createVolume(volumeName, createVolumeArgs);
+    OzoneVolume retVolumeinfo = objectStore.getVolume(volumeName);
+
+    retVolumeinfo.createBucket(bucketName);
+    OzoneBucket ozoneBucket = retVolumeinfo.getBucket(bucketName);
+
+    String leaderOMNodeId = objectStore.getClientProxy().getOMProxyProvider()
+        .getCurrentProxyOMNodeId();
+    OzoneManager ozoneManager = cluster.getOzoneManager(leaderOMNodeId);
+
+    // Send commands to ratis to increase the log index so that ratis
+    // triggers a snapshot on the state machine.
+    long appliedLogIndex = 0;
+    while (appliedLogIndex <= SNAPSHOT_THRESHOLD) {
+      String keyName = "key" + RandomStringUtils.randomNumeric(5);
+      String data = "data" + RandomStringUtils.randomNumeric(5);
+      OzoneOutputStream ozoneOutputStream = ozoneBucket.createKey(keyName,
+          data.length(), ReplicationType.STAND_ALONE,
+          ReplicationFactor.ONE, new HashMap<>());
+      ozoneOutputStream.write(data.getBytes(), 0, data.length());
+      ozoneOutputStream.close();
+
+      appliedLogIndex = ozoneManager.getOmRatisServer()
+          .getStateMachineLastAppliedIndex();
+    }
+
+    GenericTestUtils.waitFor(() -> {
+      if (ozoneManager.loadRatisSnapshotIndex() > 0) {
+        return true;
+      }

Review comment: done
[GitHub] [hadoop] hanishakoneru commented on a change in pull request #651: HDDS-1339. Implement ratis snapshots on OM
hanishakoneru commented on a change in pull request #651: HDDS-1339. Implement ratis snapshots on OM URL: https://github.com/apache/hadoop/pull/651#discussion_r270236979 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java ## @@ -308,56 +357,35 @@ private IOException constructExceptionForFailedRequest( STATUS_CODE + omResponse.getStatus()); } - /* - * Apply a committed log entry to the state machine. - */ - @Override - public CompletableFuture applyTransaction(TransactionContext trx) { -try { - OMRequest request = OMRatisHelper.convertByteStringToOMRequest( - trx.getStateMachineLogEntry().getLogData()); - CompletableFuture future = CompletableFuture - .supplyAsync(() -> runCommand(request)); - return future; -} catch (IOException e) { - return completeExceptionally(e); -} - } - /** - * Query the state machine. The request must be read-only. + * Submits write request to OM and returns the response Message. + * @param request OMRequest + * @return response from OM + * @throws ServiceException */ - @Override - public CompletableFuture query(Message request) { -try { - OMRequest omRequest = OMRatisHelper.convertByteStringToOMRequest( - request.getContent()); - return CompletableFuture.completedFuture(runCommand(omRequest)); -} catch (IOException e) { - return completeExceptionally(e); + private Message runCommand(OMRequest request, long trxLogIndex) { +OMResponse response = handler.handle(request); +if (response.getSuccess()) { Review comment: done.
[GitHub] [hadoop] hanishakoneru commented on a change in pull request #651: HDDS-1339. Implement ratis snapshots on OM
hanishakoneru commented on a change in pull request #651: HDDS-1339. Implement ratis snapshots on OM URL: https://github.com/apache/hadoop/pull/651#discussion_r270237034

## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
## @@ -115,7 +117,60 @@ public TransactionContext startTransaction(
       return ctxt;
     }
     return handleStartTransactionRequests(raftClientRequest, omRequest);
+  }
+
+  /*
+   * Apply a committed log entry to the state machine.
+   */
+  @Override
+  public CompletableFuture applyTransaction(TransactionContext trx) {
+    try {
+      OMRequest request = OMRatisHelper.convertByteStringToOMRequest(
+          trx.getStateMachineLogEntry().getLogData());
+      long trxLogIndex = trx.getLogEntry().getIndex();
+      CompletableFuture future = CompletableFuture
+          .supplyAsync(() -> runCommand(request, trxLogIndex));
+      return future;
+    } catch (IOException e) {
+      return completeExceptionally(e);
+    }
+  }
+
+  /**
+   * Query the state machine. The request must be read-only.
+   */
+  @Override
+  public CompletableFuture query(Message request) {
+    try {
+      OMRequest omRequest = OMRatisHelper.convertByteStringToOMRequest(
+          request.getContent());
+      return CompletableFuture.completedFuture(queryCommand(omRequest));
+    } catch (IOException e) {
+      return completeExceptionally(e);
+    }
+  }
+
+  /**
+   * Take OM Ratis snapshot. Write the snapshot index to file. Snapshot index
+   * is the log index corresponding to the last applied transaction on the OM
+   * State Machine.
+   *
+   * @return the last applied index on the state machine which has been
+   * stored in the snapshot file.
+   */
+  @Override
+  public long takeSnapshot() throws IOException {
+    LOG.info("Saving Ratis snapshot on the OM.");
+    return ozoneManager.saveRatisSnapshot();

Review comment: done. flushing the DB before saving a snapshot.
[GitHub] [hadoop] hanishakoneru commented on a change in pull request #651: HDDS-1339. Implement ratis snapshots on OM
hanishakoneru commented on a change in pull request #651: HDDS-1339. Implement ratis snapshots on OM URL: https://github.com/apache/hadoop/pull/651#discussion_r270236979 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java ## @@ -308,56 +357,35 @@ private IOException constructExceptionForFailedRequest( STATUS_CODE + omResponse.getStatus()); } - /* - * Apply a committed log entry to the state machine. - */ - @Override - public CompletableFuture applyTransaction(TransactionContext trx) { -try { - OMRequest request = OMRatisHelper.convertByteStringToOMRequest( - trx.getStateMachineLogEntry().getLogData()); - CompletableFuture future = CompletableFuture - .supplyAsync(() -> runCommand(request)); - return future; -} catch (IOException e) { - return completeExceptionally(e); -} - } - /** - * Query the state machine. The request must be read-only. + * Submits write request to OM and returns the response Message. + * @param request OMRequest + * @return response from OM + * @throws ServiceException */ - @Override - public CompletableFuture query(Message request) { -try { - OMRequest omRequest = OMRatisHelper.convertByteStringToOMRequest( - request.getContent()); - return CompletableFuture.completedFuture(runCommand(omRequest)); -} catch (IOException e) { - return completeExceptionally(e); + private Message runCommand(OMRequest request, long trxLogIndex) { +OMResponse response = handler.handle(request); +if (response.getSuccess()) { Review comment: done. flushing the DB before saving a snapshot.
[jira] [Commented] (HADOOP-16214) Kerberos name implementation in Hadoop does not accept principals with more than two components
[ https://issues.apache.org/jira/browse/HADOOP-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804440#comment-16804440 ] Eric Yang commented on HADOOP-16214: Patch 001 does not allow empty kerberos name. Patch 002 allows empty kerberos name, but this looks like a bug in existing Hadoop code base. > Kerberos name implementation in Hadoop does not accept principals with more > than two components > --- > > Key: HADOOP-16214 > URL: https://issues.apache.org/jira/browse/HADOOP-16214 > Project: Hadoop Common > Issue Type: Bug > Components: auth >Reporter: Issac Buenrostro >Priority: Major > Attachments: HADOOP-16214.001.patch, HADOOP-16214.002.patch > > > org.apache.hadoop.security.authentication.util.KerberosName is in charge of > converting a Kerberos principal to a user name in Hadoop for all of the > services requiring authentication. > Although the Kerberos spec > ([https://web.mit.edu/kerberos/krb5-1.5/krb5-1.5.4/doc/krb5-user/What-is-a-Kerberos-Principal_003f.html]) > allows for an arbitrary number of components in the principal, the Hadoop > implementation will throw a "Malformed Kerberos name:" error if the principal > has more than two components (because the regex can only read serviceName and > hostName). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
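The two-component limit under discussion can be reproduced with a plain regex. The sketch below is an assumption for illustration, not Hadoop's actual code: the pattern mirrors the documented behavior of `KerberosName` (service name, optional host component, realm), whose real parsing field is private.

```java
import java.util.regex.Pattern;

public class KerberosNameDemo {
    // Assumed stand-in for KerberosName's internal parsing pattern:
    // serviceName, an optional "/hostName" component, and a realm.
    // Because [^/@]* cannot contain "/", a second slash in the name
    // portion can never be consumed, which is why principals with
    // more than two components are rejected as malformed.
    static final Pattern PARSER =
        Pattern.compile("([^/@]*)(/([^/@]*))?@([^/@]*)");

    public static void main(String[] args) {
        // Two components plus realm: parses fine.
        System.out.println(
            PARSER.matcher("nn/host.example.com@EXAMPLE.COM").matches());
        // Three components: no possible match, so Hadoop would report
        // "Malformed Kerberos name".
        System.out.println(
            PARSER.matcher("a/b/c@EXAMPLE.COM").matches());
    }
}
```

This is why patching the behavior requires changing the parser itself rather than the rewrite rules applied after parsing.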
[jira] [Updated] (HADOOP-16214) Kerberos name implementation in Hadoop does not accept principals with more than two components
[ https://issues.apache.org/jira/browse/HADOOP-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Yang updated HADOOP-16214: --- Attachment: HADOOP-16214.002.patch > Kerberos name implementation in Hadoop does not accept principals with more > than two components > --- > > Key: HADOOP-16214 > URL: https://issues.apache.org/jira/browse/HADOOP-16214 > Project: Hadoop Common > Issue Type: Bug > Components: auth >Reporter: Issac Buenrostro >Priority: Major > Attachments: HADOOP-16214.001.patch, HADOOP-16214.002.patch > > > org.apache.hadoop.security.authentication.util.KerberosName is in charge of > converting a Kerberos principal to a user name in Hadoop for all of the > services requiring authentication. > Although the Kerberos spec > ([https://web.mit.edu/kerberos/krb5-1.5/krb5-1.5.4/doc/krb5-user/What-is-a-Kerberos-Principal_003f.html]) > allows for an arbitrary number of components in the principal, the Hadoop > implementation will throw a "Malformed Kerberos name:" error if the principal > has more than two components (because the regex can only read serviceName and > hostName).
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf
xiaoyuyao commented on a change in pull request #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf URL: https://github.com/apache/hadoop/pull/660#discussion_r270234817 ## File path: hadoop-ozone/tools/pom.xml ## @@ -59,6 +59,18 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd";> hadoop-hdfs compile + + com.sun.xml.bind + jaxb-core + + + javax.xml.bind Review comment: Agree, since you don't add jaxb-impl. We should be good. There is no need to exclude.
[jira] [Comment Edited] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8
[ https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804236#comment-16804236 ] Gabor Bota edited comment on HADOOP-16219 at 3/28/19 11:20 PM: --- +1 (non-binding) for the idea was (Author: gabor.bota): +1 (non-binding) > [JDK8] Set minimum version of Hadoop 2 to JDK 8 > --- > > Key: HADOOP-16219 > URL: https://issues.apache.org/jira/browse/HADOOP-16219 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 2.10.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-16219-branch-2-001.patch > > > Java 7 is long EOL; having branch-2 require it is simply making the release > process a pain (we aren't building, testing, or releasing on java 7 JVMs any > more, are we?). > Staying on java 7 complicates backporting, JAR updates for CVEs (hello > Guava!) &c are becoming impossible. > Proposed: increment javac.version = 1.8
[jira] [Commented] (HADOOP-16214) Kerberos name implementation in Hadoop does not accept principals with more than two components
[ https://issues.apache.org/jira/browse/HADOOP-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804422#comment-16804422 ] Hadoop QA commented on HADOOP-16214: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 53s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 10m 32s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 12s{color} | {color:green} hadoop-auth in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 79m 28s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-16214 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964091/HADOOP-16214.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8d834e07e994 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d7a2f94 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16083/testReport/ | | Max. process+thread count | 446 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-auth U: hadoop-common-project/hadoop-auth | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16083/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Kerberos name implementation in Hadoop does not accept principals with more > than two components > -
[GitHub] [hadoop] adoroszlai commented on a change in pull request #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf
adoroszlai commented on a change in pull request #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf URL: https://github.com/apache/hadoop/pull/660#discussion_r270233396 ## File path: hadoop-ozone/tools/pom.xml ## @@ -59,6 +59,18 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd";> hadoop-hdfs compile + + com.sun.xml.bind + jaxb-core + + + javax.xml.bind Review comment: Hi @xiaoyuyao, thanks for the review. This change adds 3 dependencies, but none of them is a transitive dependency via `hadoop-common`. Can you please clarify what needs to be excluded and why?
[GitHub] [hadoop] hadoop-yetus commented on issue #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
hadoop-yetus commented on issue #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar. URL: https://github.com/apache/hadoop/pull/632#issuecomment-477804402 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 26 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 987 | trunk passed | | +1 | compile | 68 | trunk passed | | +1 | mvnsite | 29 | trunk passed | | +1 | shadedclient | 673 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 23 | trunk passed | ||| _ Patch Compile Tests _ | | -1 | mvninstall | 22 | dist in the patch failed. | | +1 | compile | 21 | the patch passed | | +1 | javac | 21 | the patch passed | | +1 | mvnsite | 22 | the patch passed | | +1 | shellcheck | 1 | There were no new shellcheck issues. | | +1 | shelldocs | 19 | There were no new shelldocs issues. | | -1 | whitespace | 0 | The patch 3 line(s) with tabs. | | +1 | shadedclient | 751 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 20 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 24 | dist in the patch passed. | | +1 | asflicense | 31 | The patch does not generate ASF License warnings. 
| | | | 2856 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-632/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/632 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient shellcheck shelldocs | | uname | Linux 9a5cd7534c76 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / d7a2f94 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | shellcheck | v0.4.6 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/6/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/6/artifact/out/whitespace-tabs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/6/testReport/ | | Max. process+thread count | 411 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/6/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar. URL: https://github.com/apache/hadoop/pull/632#discussion_r270233318 ## File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure-s3.robot ## @@ -0,0 +1,44 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +*** Settings *** +Documentation Smoke test to start cluster with docker-compose environments. +Library OperatingSystem +Library String +Library BuiltIn +Resource../commonlib.robot +Resource../s3/commonawslib.robot + +*** Variables *** +${ENDPOINT_URL} http://s3g:9878 + +*** Keywords *** +Setup volume names +${random}Generate Random String 2 [NUMBERS] +Set Suite Variable ${volume1}fstest${random} +Set Suite Variable ${volume2}fstest2${random} + +*** Test Cases *** +Secure S3 test Success +Run Keyword Setup s3 tests +${output} = Execute aws s3api --endpoint-url ${ENDPOINT_URL} create-bucket --bucket bucket-test123 +${output} = Execute aws s3api --endpoint-url ${ENDPOINT_URL} list-buckets +Should contain ${output} bucket-test123 + +Secure S3 test Failure Review comment: whitespace:tabs in line
[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar. URL: https://github.com/apache/hadoop/pull/632#discussion_r270233312 ## File path: hadoop-ozone/dist/src/main/smoketest/commonlib.robot ## @@ -13,9 +13,15 @@ # See the License for the specific language governing permissions and # limitations under the License. -*** Keywords *** +*** Settings *** +Library OperatingSystem +Library String +Library BuiltIn +*** Variables *** Review comment: whitespace:tabs in line
[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
hadoop-yetus commented on a change in pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r270233324

## File path: hadoop-ozone/dist/src/main/smoketest/test.sh

## @@ -72,13 +72,20 @@ execute_tests(){
   docker-compose -f "$COMPOSE_FILE" down
   docker-compose -f "$COMPOSE_FILE" up -d --scale datanode=3
   wait_for_datanodes "$COMPOSE_FILE"
+
+  if [ "${COMPOSE_DIR}" == "ozonesecure" ]; then
+      SECURITY_ENABLED="true"
+  else
+      SECURITY_ENABLED="false"

Review comment: whitespace:tabs in line
[jira] [Commented] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8
[ https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804388#comment-16804388 ]

Giovanni Matteo Fumarola commented on HADOOP-16219:
---------------------------------------------------

Thanks [~ste...@apache.org]. Can you take a look at TestProcfsBasedProcessTree?

{code:java}
memoryMappingList.add(constructMemoryMappingInfo(
    "7f56c177c000-7f56c177d000 " + "rw-p 0001 08:02 40371558 "
        + "/grid/0/jdk1.7.0_25/jre/lib/amd64/libnio.so",
    // Format: size, rss, pss, shared_clean, shared_dirty, private_clean
    // private_dirty, referenced, anon, anon-huge-pages, swap,
    // kernel_page_size, mmu_page_size
    new String[] {"4", "4", "25", "4", "25", "15", "10", "4", "10", "0",
        "0", "4", "4"}));
{code}

I do not know if this change breaks the test.

> [JDK8] Set minimum version of Hadoop 2 to JDK 8
> -----------------------------------------------
>
>                 Key: HADOOP-16219
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16219
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: build
>    Affects Versions: 2.10.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>         Attachments: HADOOP-16219-branch-2-001.patch
>
>
> Java 7 is long EOL; having branch-2 require it is simply making the release
> process a pain (we aren't building, testing, or releasing on java 7 JVMs any
> more, are we?).
> Staying on java 7 complicates backporting, JAR updates for CVEs (hello
> Guava!) &c are becoming impossible.
> Proposed: increment javac.version = 1.8

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
hadoop-yetus commented on issue #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/632#issuecomment-477797510

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| 0 | reexec | 23 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 997 | trunk passed |
| +1 | compile | 36 | trunk passed |
| +1 | mvnsite | 22 | trunk passed |
| +1 | shadedclient | 637 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 21 | trunk passed |
||| _ Patch Compile Tests _ |
| -1 | mvninstall | 20 | dist in the patch failed. |
| +1 | compile | 19 | the patch passed |
| +1 | javac | 19 | the patch passed |
| +1 | mvnsite | 20 | the patch passed |
| -1 | shellcheck | 1 | The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | shelldocs | 16 | There were no new shelldocs issues. |
| -1 | whitespace | 0 | The patch has 3 line(s) with tabs. |
| +1 | shadedclient | 720 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 19 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 24 | dist in the patch passed. |
| +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
| | | 2742 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-632/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/632 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient shellcheck shelldocs |
| uname | Linux d7bf174f633f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / d7a2f94 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| shellcheck | v0.4.6 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/5/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt |
| shellcheck | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/5/artifact/out/diff-patch-shellcheck.txt |
| whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/5/artifact/out/whitespace-tabs.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/5/testReport/ |
| Max. process+thread count | 411 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/5/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8
[ https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804378#comment-16804378 ]

Erik Krogen commented on HADOOP-16219:
--------------------------------------

I'm not necessarily against this, but just to be clear, this flies directly in the face of our compatibility guidelines by being an incompatible change in a minor version release, right?
[jira] [Updated] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8
[ https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-16219:
------------------------------------
    Assignee: Steve Loughran
      Status: Patch Available  (was: Open)
[jira] [Updated] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8
[ https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-16219:
------------------------------------
    Attachment: HADOOP-16219-branch-2-001.patch
[jira] [Commented] (HADOOP-16208) Do Not Log InterruptedException in Client
[ https://issues.apache.org/jira/browse/HADOOP-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804370#comment-16804370 ]

Steve Loughran commented on HADOOP-16208:
-----------------------------------------

And to be clear, -1 until the exception is changed

> Do Not Log InterruptedException in Client
> -----------------------------------------
>
>                 Key: HADOOP-16208
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16208
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: common
>    Affects Versions: 3.2.0
>            Reporter: David Mollitor
>            Assignee: David Mollitor
>            Priority: Minor
>         Attachments: HADOOP-16208.1.patch
>
>
> {code:java}
>     } catch (InterruptedException e) {
>       Thread.currentThread().interrupt();
>       LOG.warn("interrupted waiting to send rpc request to server", e);
>       throw new IOException(e);
>     }
> {code}
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L1450
> I'm working on a project that uses an {{ExecutorService}} to launch a bunch
> of threads. Each thread spins up an HDFS client connection. At any point in
> time, the program can terminate and call {{ExecutorService#shutdownNow()}} to
> forcibly close vis-à-vis {{Thread#interrupt()}}. At that point, I get a
> cascade of logging from the above code and there's no easy way to turn it
> off.
> "Log and throw" is generally frowned upon, just throw the {{Exception}} and
> move on.
> https://community.oracle.com/docs/DOC-983543#logAndThrow
[jira] [Commented] (HADOOP-16208) Do Not Log InterruptedException in Client
[ https://issues.apache.org/jira/browse/HADOOP-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804369#comment-16804369 ]

Steve Loughran commented on HADOOP-16208:
-----------------------------------------

bq. java.io.InterruptedIOException cannot wrap another Exception and I don't think a new Exception should be thrown and lose the details of the original cause.

This is what {{Throwable.initCause()}} is for
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java#L355
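For context, the pattern being suggested — re-interrupt, then throw an {{InterruptedIOException}} with the original cause attached via {{initCause()}}, and no logging — can be sketched as follows. This is an illustrative standalone class, not the actual Client.java patch; the message string and `wrap` helper are made up for the example.

```java
import java.io.InterruptedIOException;

public class InterruptedWrapDemo {
    // InterruptedIOException has no (Throwable) constructor, so the original
    // InterruptedException is attached after construction with initCause(),
    // preserving its stack trace instead of discarding it.
    static InterruptedIOException wrap(InterruptedException e) {
        InterruptedIOException ioe = new InterruptedIOException(
            "interrupted waiting to send rpc request to server");
        ioe.initCause(e);
        return ioe;
    }

    public static void main(String[] args) {
        try {
            Thread.currentThread().interrupt(); // simulate shutdownNow()
            Thread.sleep(10);                   // throws InterruptedException
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore interrupt status
            // throw, don't log-and-throw; here we just inspect the result
            InterruptedIOException ioe = wrap(e);
            System.out.println(ioe.getCause().getClass().getSimpleName());
        }
    }
}
```

Callers catching `IOException` still see an `IOException` subclass, and callers that care about interruption can catch `InterruptedIOException` specifically.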
[jira] [Commented] (HADOOP-16011) OsSecureRandom very slow compared to other SecureRandom implementations
[ https://issues.apache.org/jira/browse/HADOOP-16011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804354#comment-16804354 ]

Siyao Meng commented on HADOOP-16011:
-------------------------------------

[~jojochuang] I see no reference of OsSecureRandom in core-default.xml, only OpensslAesCtrCryptoCodec here:

{code:xml}
<property>
  <name>hadoop.security.crypto.codec.classes.aes.ctr.nopadding</name>
  <value>org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec,
  org.apache.hadoop.crypto.JceAesCtrCryptoCodec</value>
  <description>
    Comma-separated list of crypto codec implementations for
    AES/CTR/NoPadding. The first implementation will be used if available,
    others are fallbacks.
  </description>
</property>
{code}

What change do we need here?

> OsSecureRandom very slow compared to other SecureRandom implementations
> -----------------------------------------------------------------------
>
>                 Key: HADOOP-16011
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16011
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: security
>            Reporter: Todd Lipcon
>            Assignee: Siyao Meng
>            Priority: Major
>         Attachments: HADOOP-16011.001.patch, MyBenchmark.java
>
>
> In looking at performance of a workload which creates a lot of short-lived
> remote connections to a secured DN, [~philip] and I found very high system
> CPU usage. We tracked it down to reads from /dev/random, which are incurred
> by the DN using CryptoCodec.generateSecureRandom to generate a transient
> session key and IV for AES encryption.
> In the case that the OpenSSL codec is not enabled, the above code falls
> through to the JDK SecureRandom implementation, which performs reasonably.
> However, OpenSSLCodec defaults to using OsSecureRandom, which reads all
> random data from /dev/random rather than doing something more efficient like
> initializing a CSPRNG from a small seed.
> I wrote a simple JMH benchmark to compare various approaches when running
> with concurrency 10:
> testHadoop - using CryptoCodec
> testNewSecureRandom - using 'new SecureRandom()' each iteration
> testSha1PrngNew - using the SHA1PRNG explicitly, new instance each iteration
> testSha1PrngShared - using a single shared instance of SHA1PRNG
> testSha1PrngThread - using a thread-specific instance of SHA1PRNG
> {code:java}
> Benchmark                         Mode  Cnt        Score  Error  Units
> MyBenchmark.testHadoop           thrpt           1293.000         ops/s  [with libhadoop.so]
> MyBenchmark.testHadoop           thrpt         461515.697         ops/s  [without libhadoop.so]
> MyBenchmark.testNewSecureRandom  thrpt          43413.640         ops/s
> MyBenchmark.testSha1PrngNew      thrpt         395515.000         ops/s
> MyBenchmark.testSha1PrngShared   thrpt         164488.713         ops/s
> MyBenchmark.testSha1PrngThread   thrpt        4295123.210         ops/s
> {code}
> In other words, the presence of the OpenSSL acceleration slows down this code
> path by 356x. And, compared to the optimal (thread-local Sha1Prng) it's 3321x
> slower.
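The fastest variant above (testSha1PrngThread, a thread-specific SHA1PRNG) boils down to a pattern like the following. This is a sketch of the idea, not the attached MyBenchmark.java or the HADOOP-16011 patch; the class and method names here are invented for illustration.

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class ThreadLocalPrng {
    // One SHA1PRNG per thread: seeded once from the OS entropy pool, then a
    // pure software CSPRNG afterwards -- no per-call /dev/random reads and
    // no lock contention between threads.
    private static final ThreadLocal<SecureRandom> PRNG =
        ThreadLocal.withInitial(() -> {
            try {
                return SecureRandom.getInstance("SHA1PRNG");
            } catch (NoSuchAlgorithmException e) {
                throw new IllegalStateException(e);
            }
        });

    // Generate key/IV material the way a transient AES session setup would.
    static byte[] generateBytes(int numBytes) {
        byte[] out = new byte[numBytes];
        PRNG.get().nextBytes(out);
        return out;
    }

    public static void main(String[] args) {
        byte[] key = generateBytes(16);
        byte[] iv = generateBytes(16);
        System.out.println(key.length + " " + iv.length);
    }
}
```

The trade-off versus OsSecureRandom is that randomness after the initial seed comes from the PRNG state rather than the kernel on every call, which is exactly why it avoids the /dev/random system-CPU cost described above.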
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf
xiaoyuyao commented on a change in pull request #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/660#discussion_r270218938

## File path: hadoop-ozone/tools/pom.xml

## @@ -59,6 +59,18 @@
       hadoop-hdfs
       compile
 +
 +      com.sun.xml.bind
 +      jaxb-core
 +
 +      javax.xml.bind

Review comment: This needs to be excluded from the hadoop-common dependency, like below:

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <scope>compile</scope>
      <exclusions>
        <exclusion>
          <groupId>com.sun.xml.bind</groupId>
          <artifactId>jaxb-impl</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
[GitHub] [hadoop] hanishakoneru commented on a change in pull request #651: HDDS-1339. Implement ratis snapshots on OM
hanishakoneru commented on a change in pull request #651: HDDS-1339. Implement ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#discussion_r270217752

## File path: hadoop-hdds/common/src/main/resources/ozone-default.xml

## @@ -1603,18 +1603,27 @@
     ozone.om.ratis.log.appender.queue.num-elements
     1024
-    OZONE, DEBUG, CONTAINER, RATIS
+    OZONE, DEBUG, OM, RATIS
     Number of operation pending with Raft's Log Worker.

     ozone.om.ratis.log.appender.queue.byte-limit
     32MB
-    OZONE, DEBUG, CONTAINER, RATIS
+    OZONE, DEBUG, OM, RATIS
     Byte limit for Raft's Log Worker queue.
+
+    ozone.om.ratis.snapshot.auto.trigger.threshold
+    40L

Review comment: This is the default in Ratis so used that. I was thinking we can update it after extensive testing. But I am open to suggestions.
[jira] [Commented] (HADOOP-16214) Kerberos name implementation in Hadoop does not accept principals with more than two components
[ https://issues.apache.org/jira/browse/HADOOP-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804339#comment-16804339 ]

Eric Yang commented on HADOOP-16214:
------------------------------------

Patch 1 supports multiple components by validating the name format using the JDK KerberosPrincipal class. It also keeps the Hadoop service principal format intact by checking the [service]/[host]@[realm] form and ensuring the host part is a FQDN string.

> Kerberos name implementation in Hadoop does not accept principals with more
> than two components
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-16214
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16214
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: auth
>            Reporter: Issac Buenrostro
>            Priority: Major
>         Attachments: HADOOP-16214.001.patch
>
>
> org.apache.hadoop.security.authentication.util.KerberosName is in charge of
> converting a Kerberos principal to a user name in Hadoop for all of the
> services requiring authentication.
> Although the Kerberos spec
> ([https://web.mit.edu/kerberos/krb5-1.5/krb5-1.5.4/doc/krb5-user/What-is-a-Kerberos-Principal_003f.html])
> allows for an arbitrary number of components in the principal, the Hadoop
> implementation will throw a "Malformed Kerberos name:" error if the principal
> has more than two components (because the regex can only read serviceName and
> hostName).
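For reference, the JDK's {{javax.security.auth.kerberos.KerberosPrincipal}} — the class the comment above says patch 1 validates against — does accept principals with more than two components. A minimal demonstration (the principal name here is invented; the actual KerberosName change is in the attached patch):

```java
import javax.security.auth.kerberos.KerberosPrincipal;

public class MultiComponentPrincipal {
    public static void main(String[] args) {
        // A three-component principal; KerberosName's two-component regex
        // would reject this as "Malformed Kerberos name", but the JDK
        // parser handles an arbitrary number of '/'-separated components.
        // The realm is explicit, so no krb5.conf is needed to parse it.
        KerberosPrincipal p =
            new KerberosPrincipal("svc/host.example.com/extra@EXAMPLE.COM");
        System.out.println(p.getRealm()); // realm parsed from the name
        System.out.println(p.getName());  // full principal, all components
    }
}
```

The constructor throws `IllegalArgumentException` on genuinely malformed names, which is what makes it usable as a format validator.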
[GitHub] [hadoop] hadoop-yetus commented on issue #646: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite
hadoop-yetus commented on issue #646: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite
URL: https://github.com/apache/hadoop/pull/646#issuecomment-477787273

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| 0 | reexec | 0 | Docker mode activated. |
| -1 | patch | 7 | https://github.com/apache/hadoop/pull/646 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

| Subsystem | Report/Notes |
|----------:|:-------------|
| GITHUB PR | https://github.com/apache/hadoop/pull/646 |
| JIRA Issue | HADOOP-16085 |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-646/6/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-16085) S3Guard: use object version or etags to protect against inconsistent read after replace/overwrite
[ https://issues.apache.org/jira/browse/HADOOP-16085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804338#comment-16804338 ]

Hadoop QA commented on HADOOP-16085:
------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} https://github.com/apache/hadoop/pull/646 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/646 |
| JIRA Issue | HADOOP-16085 |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-646/6/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |

This message was automatically generated.

> S3Guard: use object version or etags to protect against inconsistent read
> after replace/overwrite
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-16085
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16085
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2.0
>            Reporter: Ben Roling
>            Assignee: Ben Roling
>            Priority: Major
>         Attachments: HADOOP-16085-003.patch, HADOOP-16085_002.patch,
> HADOOP-16085_3.2.0_001.patch
>
>
> Currently S3Guard doesn't track S3 object versions. If a file is written in
> S3A with S3Guard and then subsequently overwritten, there is no protection
> against the next reader seeing the old version of the file instead of the new
> one.
> It seems like the S3Guard metadata could track the S3 object version. When a
> file is created or updated, the object version could be written to the
> S3Guard metadata. When a file is read, the read out of S3 could be performed
> by object version, ensuring the correct version is retrieved.
> I don't have a lot of direct experience with this yet, but this is my
> impression from looking through the code. My organization is looking to
> shift some datasets stored in HDFS over to S3 and is concerned about this
> potential issue as there are some cases in our codebase that would do an
> overwrite.
> I imagine this idea may have been considered before but I couldn't quite
> track down any JIRAs discussing it. If there is one, feel free to close this
> with a reference to it.
> Am I understanding things correctly? Is this idea feasible? Any feedback
> that could be provided would be appreciated. We may consider crafting a
> patch.
[GitHub] [hadoop] hadoop-yetus commented on issue #648: HDDS-1340. Add List Containers API for Recon
hadoop-yetus commented on issue #648: HDDS-1340. Add List Containers API for Recon
URL: https://github.com/apache/hadoop/pull/648#issuecomment-477785692

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| 0 | reexec | 26 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 1080 | trunk passed |
| +1 | compile | 40 | trunk passed |
| +1 | checkstyle | 15 | trunk passed |
| +1 | mvnsite | 26 | trunk passed |
| +1 | shadedclient | 742 | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 37 | trunk passed |
| +1 | javadoc | 19 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 32 | the patch passed |
| +1 | compile | 21 | the patch passed |
| +1 | javac | 21 | the patch passed |
| -0 | checkstyle | 11 | hadoop-ozone/ozone-recon: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) |
| +1 | mvnsite | 26 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 844 | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 47 | the patch passed |
| +1 | javadoc | 19 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 34 | ozone-recon in the patch passed. |
| +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
| | | 3146 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-648/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/648 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 8508d8bea447 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 4cceeb2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-648/5/artifact/out/diff-checkstyle-hadoop-ozone_ozone-recon.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-648/5/testReport/ |
| Max. process+thread count | 305 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/ozone-recon U: hadoop-ozone/ozone-recon |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-648/5/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] bharatviswa504 merged pull request #656: HDDS-1350. Fix checkstyle issue in TestDatanodeStateMachine. Contribu…
bharatviswa504 merged pull request #656: HDDS-1350. Fix checkstyle issue in TestDatanodeStateMachine. Contribu…
URL: https://github.com/apache/hadoop/pull/656
[GitHub] [hadoop] bharatviswa504 commented on issue #656: HDDS-1350. Fix checkstyle issue in TestDatanodeStateMachine. Contribu…
bharatviswa504 commented on issue #656: HDDS-1350. Fix checkstyle issue in TestDatanodeStateMachine. Contribu…
URL: https://github.com/apache/hadoop/pull/656#issuecomment-477784196

+1 LGTM. I will commit this.
[jira] [Updated] (HADOOP-16214) Kerberos name implementation in Hadoop does not accept principals with more than two components
[ https://issues.apache.org/jira/browse/HADOOP-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eric Yang updated HADOOP-16214:
-------------------------------
    Attachment: HADOOP-16214.001.patch
[GitHub] [hadoop] hadoop-yetus commented on issue #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf
hadoop-yetus commented on issue #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/660#issuecomment-477782897

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| 0 | reexec | 31 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ ozone-0.4 Compile Tests _ |
| +1 | mvninstall | 1048 | ozone-0.4 passed |
| -1 | compile | 61 | tools in ozone-0.4 failed. |
| -1 | mvnsite | 27 | tools in ozone-0.4 failed. |
| +1 | shadedclient | 1778 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 18 | ozone-0.4 passed |
||| _ Patch Compile Tests _ |
| -1 | mvninstall | 22 | tools in the patch failed. |
| -1 | compile | 25 | tools in the patch failed. |
| -1 | javac | 25 | tools in the patch failed. |
| -1 | mvnsite | 22 | tools in the patch failed. |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 2 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 727 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 17 | the patch passed |
||| _ Other Tests _ |
| -1 | unit | 22 | tools in the patch failed. |
| +1 | asflicense | 24 | The patch does not generate ASF License warnings. |
| | | 2813 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/660 |
| JIRA Issue | HDDS-1351 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml |
| uname | Linux 68aebaf3683f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | ozone-0.4 / f2dee89 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/branch-compile-hadoop-ozone_tools.txt |
| mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/branch-mvnsite-hadoop-ozone_tools.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/patch-mvninstall-hadoop-ozone_tools.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/patch-compile-hadoop-ozone_tools.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/patch-compile-hadoop-ozone_tools.txt |
| mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/patch-mvnsite-hadoop-ozone_tools.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/patch-unit-hadoop-ozone_tools.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/testReport/ |
| Max. process+thread count | 441 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/tools U: hadoop-ozone/tools |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |

This message was automatically generated.
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16214) Kerberos name implementation in Hadoop does not accept principals with more than two components
[ https://issues.apache.org/jira/browse/HADOOP-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Yang updated HADOOP-16214: --- Status: Patch Available (was: Open)
[GitHub] [hadoop] hadoop-yetus commented on issue #659: [HDDS-1351] NoClassDefFoundError when running ozone genconf
hadoop-yetus commented on issue #659: [HDDS-1351] NoClassDefFoundError when running ozone genconf URL: https://github.com/apache/hadoop/pull/659#issuecomment-49668 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 27 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1027 | trunk passed | | +1 | compile | 58 | trunk passed | | +1 | mvnsite | 31 | trunk passed | | +1 | shadedclient | 1789 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 19 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 34 | the patch passed | | +1 | compile | 23 | the patch passed | | +1 | javac | 23 | the patch passed | | +1 | mvnsite | 23 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 2 | The patch has no ill-formed XML file. | | +1 | shadedclient | 685 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 17 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 63 | tools in the patch passed. | | +1 | asflicense | 25 | The patch does not generate ASF License warnings. 
| | | | 2821 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-659/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/659 | | JIRA Issue | HDDS-1351 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 0fe4c6c24cf7 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 4cceeb2 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-659/1/testReport/ | | Max. process+thread count | 2388 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/tools U: hadoop-ozone/tools | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-659/1/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #656: HDDS-1350. Fix checkstyle issue in TestDatanodeStateMachine. Contribu…
hadoop-yetus commented on issue #656: HDDS-1350. Fix checkstyle issue in TestDatanodeStateMachine. Contribu… URL: https://github.com/apache/hadoop/pull/656#issuecomment-477766400 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 26 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1144 | trunk passed | | +1 | compile | 48 | trunk passed | | +1 | checkstyle | 21 | trunk passed | | +1 | mvnsite | 32 | trunk passed | | +1 | shadedclient | 762 | branch has no errors when building and testing our client artifacts. | | +1 | findbugs | 54 | trunk passed | | +1 | javadoc | 27 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 36 | the patch passed | | +1 | compile | 28 | the patch passed | | +1 | javac | 28 | the patch passed | | +1 | checkstyle | 14 | hadoop-hdds/container-service: The patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) | | +1 | mvnsite | 29 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 794 | patch has no errors when building and testing our client artifacts. | | +1 | findbugs | 58 | the patch passed | | +1 | javadoc | 23 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 58 | container-service in the patch passed. | | +1 | asflicense | 24 | The patch does not generate ASF License warnings. 
| | | | 3265 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-656/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/656 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 77e8e53a25b3 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 4cceeb2 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-656/1/testReport/ | | Max. process+thread count | 340 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-656/1/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #655: HADOOP-16218. Findbugs warning of null param in Configuration with Guava update.
hadoop-yetus commented on issue #655: HADOOP-16218. Findbugs warning of null param in Configuration with Guava update. URL: https://github.com/apache/hadoop/pull/655#issuecomment-477763876 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 60 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1386 | trunk passed | | +1 | compile | 1512 | trunk passed | | +1 | checkstyle | 68 | trunk passed | | +1 | mvnsite | 97 | trunk passed | | +1 | shadedclient | 978 | branch has no errors when building and testing our client artifacts. | | +1 | findbugs | 126 | trunk passed | | +1 | javadoc | 77 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 61 | the patch passed | | +1 | compile | 1440 | the patch passed | | +1 | javac | 1440 | the patch passed | | +1 | checkstyle | 63 | the patch passed | | +1 | mvnsite | 92 | the patch passed | | +1 | whitespace | 1 | The patch has no whitespace issues. | | +1 | shadedclient | 778 | patch has no errors when building and testing our client artifacts. | | +1 | findbugs | 112 | the patch passed | | +1 | javadoc | 68 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 625 | hadoop-common in the patch passed. | | +1 | asflicense | 51 | The patch does not generate ASF License warnings. 
| | | | 7646 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-655/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/655 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8928a30bc7ac 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 49b02d4 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-655/1/testReport/ | | Max. process+thread count | 1356 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-655/1/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename
hadoop-yetus commented on issue #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename URL: https://github.com/apache/hadoop/pull/654#issuecomment-477760405 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 28 | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 8 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 60 | Maven dependency ordering for branch | | +1 | mvninstall | 994 | trunk passed | | +1 | compile | 944 | trunk passed | | +1 | checkstyle | 196 | trunk passed | | +1 | mvnsite | 118 | trunk passed | | +1 | shadedclient | 1032 | branch has no errors when building and testing our client artifacts. | | +1 | findbugs | 136 | trunk passed | | +1 | javadoc | 81 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 20 | Maven dependency ordering for patch | | -1 | mvninstall | 24 | hadoop-aws in the patch failed. | | -1 | compile | 838 | root in the patch failed. | | -1 | javac | 838 | root in the patch failed. | | -0 | checkstyle | 185 | root: The patch generated 14 new + 40 unchanged - 0 fixed = 54 total (was 40) | | -1 | mvnsite | 38 | hadoop-aws in the patch failed. | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 614 | patch has no errors when building and testing our client artifacts. | | -1 | findbugs | 32 | hadoop-aws in the patch failed. | | +1 | javadoc | 86 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 519 | hadoop-common in the patch failed. | | -1 | unit | 39 | hadoop-aws in the patch failed. | | +1 | asflicense | 36 | The patch does not generate ASF License warnings. 
| | | | 6197 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.security.token.delegation.TestZKDelegationTokenSecretManager | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-654/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/654 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 11a912ddd3a7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / f3f5128 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-654/2/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-654/2/artifact/out/patch-compile-root.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-654/2/artifact/out/patch-compile-root.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-654/2/artifact/out/diff-checkstyle-root.txt | | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-654/2/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-654/2/artifact/out/patch-findbugs-hadoop-tools_hadoop-aws.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-654/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-654/2/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-654/2/testReport/ | | Max. process+thread count | 1688 (vs. 
ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-654/2/console | | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] adoroszlai opened a new pull request #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf
adoroszlai opened a new pull request #660: [HDDS-1351] NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/660

## What changes were proposed in this pull request?

Add `jaxb-core` and some `javax` artifacts to `hadoop-ozone-tools` dependencies to make `ozone genconf` work with JDK11, too.

https://issues.apache.org/jira/browse/HDDS-1351

## How was this patch tested?

```
$ mvn -Phdds -DskipTests -Dmaven.javadoc.skip=true -Pdist -Dtar -DskipShade -am -pl :hadoop-ozone-dist clean package
$ cd $(git rev-parse --show-toplevel)/hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
$ docker-compose run datanode ozone genconf /tmp
ozone-site.xml has been generated at /tmp
```
[GitHub] [hadoop] adoroszlai opened a new pull request #659: [HDDS-1351] NoClassDefFoundError when running ozone genconf
adoroszlai opened a new pull request #659: [HDDS-1351] NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/659

## What changes were proposed in this pull request?

Add `jaxb-core` to `hadoop-ozone-tools` dependencies to make `ozone genconf` work again.

https://issues.apache.org/jira/browse/HDDS-1351

## How was this patch tested?

```
$ mvn -Phdds -DskipTests -Dmaven.javadoc.skip=true -Pdist -Dtar -DskipShade -am -pl :hadoop-ozone-dist clean package
$ cd $(git rev-parse --show-toplevel)/hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozones3
$ docker-compose run datanode ozone genconf /tmp
ozone-site.xml has been generated at /tmp
```
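For context on why these pull requests add `jaxb-core`: the `javax.xml.bind` (JAXB) classes were bundled with the JDK through Java 8 but were removed in JDK 11 (JEP 320), so code that binds XML — as `ozone genconf` does when writing `ozone-site.xml` — throws `NoClassDefFoundError` unless the JAXB artifacts are supplied as explicit dependencies. A minimal, hypothetical classpath probe (the `JaxbCheck` wrapper is illustrative; only the class name `javax.xml.bind.JAXBContext` comes from the real API):

```java
// Probes whether the JAXB API is present on the classpath. On JDK 8 this
// prints true out of the box; on JDK 11 it prints true only when jaxb
// artifacts have been added as dependencies, which is what HDDS-1351 does.
public class JaxbCheck {
    public static boolean jaxbOnClasspath() {
        try {
            Class.forName("javax.xml.bind.JAXBContext");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("JAXB available: " + jaxbOnClasspath());
    }
}
```

A probe like this returns a definite answer either way, whereas the unpatched `ozone genconf` fails only at the moment the missing class is first linked.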
[jira] [Commented] (HADOOP-16011) OsSecureRandom very slow compared to other SecureRandom implementations
[ https://issues.apache.org/jira/browse/HADOOP-16011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804267#comment-16804267 ] Hadoop QA commented on HADOOP-16011: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 57s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 55s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 36s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 90m 14s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-16011 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964057/HADOOP-16011.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b2114a3453b3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 49b02d4 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16082/testReport/ | | Max. process+thread count | 1451 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16082/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > OsSecureRand
[jira] [Commented] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8
[ https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804268#comment-16804268 ] Da Zhou commented on HADOOP-16219: -- +1 this will make backporting much easier. Regards, Da > [JDK8] Set minimum version of Hadoop 2 to JDK 8 > --- > > Key: HADOOP-16219 > URL: https://issues.apache.org/jira/browse/HADOOP-16219 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 2.10.0 >Reporter: Steve Loughran >Priority: Major > > Java 7 is long EOL; having branch-2 require it is simply making the release > process a pain (we aren't building, testing, or releasing on java 7 JVMs any > more, are we?). > Staying on java 7 complicates backporting, JAR updates for CVEs (hello > Guava!) &c are becoming impossible. > Proposed: increment javac.version = 1.8 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8
[ https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804254#comment-16804254 ] Steve Loughran commented on HADOOP-16219: - This is a more significant decision than a JIRA; needs to be discussed on the lists. Before worrying about that, let's see what happens with the builds. Closed the first PR as I managed to find the hadoop 3 -> java 8 patch and went with that one
[jira] [Updated] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8
[ https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16219: Summary: [JDK8] Set minimum version of Hadoop 2 to JDK 8 (was: Hadoop branch-2 to set java language version to 1.8)
[GitHub] [hadoop] steveloughran opened a new pull request #658: HADOOP-16219. [JDK8] Set minimum version of Hadoop 2 to JDK 8.
steveloughran opened a new pull request #658: HADOOP-16219. [JDK8] Set minimum version of Hadoop 2 to JDK 8. URL: https://github.com/apache/hadoop/pull/658 Based on HADOOP-11858. Contributed by Robert Kanter. (cherry picked from commit 4b55642b9d836691592405805c181d12c2ed7e50) Change-Id: I18a58d5f50b84cb27e1bf1814e527a0c01e9782e
[jira] [Commented] (HADOOP-16219) Hadoop branch-2 to set java language version to 1.8
[ https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804236#comment-16804236 ] Gabor Bota commented on HADOOP-16219: - +1 (non-binding)
[jira] [Commented] (HADOOP-16219) Hadoop branch-2 to set java language version to 1.8
[ https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804234#comment-16804234 ] John Zhuge commented on HADOOP-16219: - +1 -- John
[GitHub] [hadoop] steveloughran closed pull request #657: HADOOP-16219. Hadoop branch-2 to set java language version to 1.8
steveloughran closed pull request #657: HADOOP-16219. Hadoop branch-2 to set java language version to 1.8 URL: https://github.com/apache/hadoop/pull/657 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran opened a new pull request #657: HADOOP-16219. Hadoop branch-2 to set java language version to 1.8
steveloughran opened a new pull request #657: HADOOP-16219. Hadoop branch-2 to set java language version to 1.8 URL: https://github.com/apache/hadoop/pull/657 Change-Id: Id085c144dc5f71b08ec86f497c51cc494957a664
[jira] [Created] (HADOOP-16219) Hadoop branch-2 to set java language version to 1.8
Steve Loughran created HADOOP-16219: --- Summary: Hadoop branch-2 to set java language version to 1.8 Key: HADOOP-16219 URL: https://issues.apache.org/jira/browse/HADOOP-16219 Project: Hadoop Common Issue Type: Improvement Components: build Affects Versions: 2.10.0 Reporter: Steve Loughran Java 7 is long EOL; having branch-2 require it is simply making the release process a pain (we aren't building, testing, or releasing on java 7 JVMs any more, are we?). Staying on java 7 complicates backporting, JAR updates for CVEs (hello Guava!) &c are becoming impossible. Proposed: increment javac.version = 1.8
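As the ticket says, the change boils down to bumping one property in the branch-2 root POM. A rough, illustrative fragment (the property name comes from the ticket; the surrounding XML is a sketch, not the actual diff):

```xml
<!-- branch-2 pom.xml (illustrative fragment): raise the language level -->
<properties>
  <!-- was 1.7; the maven-compiler-plugin reads source/target from this property -->
  <javac.version>1.8</javac.version>
</properties>
```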
[GitHub] [hadoop] bharatviswa504 merged pull request #638: HDDS-1309 . change logging from warn to debug in XceiverClient. Contr…
bharatviswa504 merged pull request #638: HDDS-1309 . change logging from warn to debug in XceiverClient. Contr… URL: https://github.com/apache/hadoop/pull/638
[GitHub] [hadoop] bharatviswa504 commented on issue #638: HDDS-1309 . change logging from warn to debug in XceiverClient. Contr…
bharatviswa504 commented on issue #638: HDDS-1309 . change logging from warn to debug in XceiverClient. Contr… URL: https://github.com/apache/hadoop/pull/638#issuecomment-477736817 +1 LGTM. I will commit this.
[GitHub] [hadoop] xiaoyuyao merged pull request #641: HDDS-1318. Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
xiaoyuyao merged pull request #641: HDDS-1318. Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao. URL: https://github.com/apache/hadoop/pull/641
[GitHub] [hadoop] xiaoyuyao opened a new pull request #656: HDDS-1350. Fix checkstyle issue in TestDatanodeStateMachine. Contribu…
xiaoyuyao opened a new pull request #656: HDDS-1350. Fix checkstyle issue in TestDatanodeStateMachine. Contribu… URL: https://github.com/apache/hadoop/pull/656 …ted by Xiaoyu Yao.
[GitHub] [hadoop] xiaoyuyao commented on issue #641: HDDS-1318. Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
xiaoyuyao commented on issue #641: HDDS-1318. Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao. URL: https://github.com/apache/hadoop/pull/641#issuecomment-477725418 Thanks for the review @ajayydv, the checkstyle issue is not introduced by this patch. I've opened a separate JIRA: https://issues.apache.org/jira/browse/HDDS-1350 so that we can get a clean cherry-pick for ozone-0.4 here.
[GitHub] [hadoop] ajayydv edited a comment on issue #641: HDDS-1318. Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
ajayydv edited a comment on issue #641: HDDS-1318. Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao. URL: https://github.com/apache/hadoop/pull/641#issuecomment-477716691 +1 with checkstyle addressed.
[GitHub] [hadoop] ajayydv commented on issue #641: HDDS-1318. Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
ajayydv commented on issue #641: HDDS-1318. Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao. URL: https://github.com/apache/hadoop/pull/641#issuecomment-477716691 +1
[jira] [Commented] (HADOOP-16208) Do Not Log InterruptedException in Client
[ https://issues.apache.org/jira/browse/HADOOP-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804171#comment-16804171 ] Giovanni Matteo Fumarola commented on HADOOP-16208: --- [^HADOOP-16208.1.patch] LGTM +1. I agree with David's comment. > Do Not Log InterruptedException in Client > - > > Key: HADOOP-16208 > URL: https://issues.apache.org/jira/browse/HADOOP-16208 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2.0 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > Attachments: HADOOP-16208.1.patch > > > {code:java} > } catch (InterruptedException e) { > Thread.currentThread().interrupt(); > LOG.warn("interrupted waiting to send rpc request to server", e); > throw new IOException(e); > } > {code} > https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L1450 > I'm working on a project that uses an {{ExecutorService}} to launch a bunch > of threads. Each thread spins up an HDFS client connection. At any point in > time, the program can terminate and call {{ExecutorService#shutdownNow()}} to > forcibly close vis-à-vis {{Thread#interrupt()}}. At that point, I get a > cascade of logging from the above code and there's no easy way to turn it > off. > "Log and throw" is generally frowned upon, just throw the {{Exception}} and > move on. > https://community.oracle.com/docs/DOC-983543#logAndThrow
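For illustration, the handling David argues for amounts to keeping the interrupt-status restore, dropping the WARN, and letting the caller decide whether to log. This is a sketch, not the actual patch; `toIOException` is a hypothetical helper, not a method in `Client.java`:

```java
import java.io.IOException;

public class InterruptHandling {
    // Sketch of the proposed catch-block behavior: restore the thread's
    // interrupt flag and wrap the exception, with no LOG.warn call --
    // the caller can log it if it actually matters.
    public static IOException toIOException(InterruptedException e) {
        Thread.currentThread().interrupt(); // preserve the interrupt status
        return new IOException(e);          // propagate instead of log-and-throw
    }
}
```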
[jira] [Commented] (HADOOP-16011) OsSecureRandom very slow compared to other SecureRandom implementations
[ https://issues.apache.org/jira/browse/HADOOP-16011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804166#comment-16804166 ] Wei-Chiu Chuang commented on HADOOP-16011: -- I think you'll need to update core-default.xml as well, and update the release note too. > OsSecureRandom very slow compared to other SecureRandom implementations > --- > > Key: HADOOP-16011 > URL: https://issues.apache.org/jira/browse/HADOOP-16011 > Project: Hadoop Common > Issue Type: Bug > Components: security >Reporter: Todd Lipcon >Assignee: Siyao Meng >Priority: Major > Attachments: HADOOP-16011.001.patch, MyBenchmark.java > > > In looking at performance of a workload which creates a lot of short-lived > remote connections to a secured DN, [~philip] and I found very high system > CPU usage. We tracked it down to reads from /dev/random, which are incurred > by the DN using CryptoCodec.generateSecureRandom to generate a transient > session key and IV for AES encryption. > In the case that the OpenSSL codec is not enabled, the above code falls > through to the JDK SecureRandom implementation, which performs reasonably. > However, OpenSSLCodec defaults to using OsSecureRandom, which reads all > random data from /dev/random rather than doing something more efficient like > initializing a CSPRNG from a small seed.
> I wrote a simple JMH benchmark to compare various approaches when running > with concurrency 10: > testHadoop - using CryptoCodec > testNewSecureRandom - using 'new SecureRandom()' each iteration > testSha1PrngNew - using the SHA1PRNG explicitly, new instance each iteration > testSha1PrngShared - using a single shared instance of SHA1PRNG > testSha1PrngThread - using a thread-specific instance of SHA1PRNG > {code:java} > Benchmark                         Mode  Cnt       Score   Error  Units > MyBenchmark.testHadoop           thrpt         1293.000          ops/s  [with libhadoop.so] > MyBenchmark.testHadoop           thrpt       461515.697          ops/s  [without libhadoop.so] > MyBenchmark.testNewSecureRandom  thrpt        43413.640          ops/s > MyBenchmark.testSha1PrngNew      thrpt       395515.000          ops/s > MyBenchmark.testSha1PrngShared   thrpt       164488.713          ops/s > MyBenchmark.testSha1PrngThread   thrpt      4295123.210          ops/s > {code} > In other words, the presence of the OpenSSL acceleration slows down this code > path by 356x. And, compared to the optimal (thread-local Sha1Prng) it's 3321x > slower.
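The fastest case in the benchmark above (a thread-specific SHA1PRNG) can be sketched as follows. This is illustrative only, not Hadoop or benchmark code; `ThreadLocalPrng` and `randomBytes` are hypothetical names:

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class ThreadLocalPrng {
    // One SHA1PRNG instance per thread: avoids both blocking reads from
    // /dev/random and lock contention on a single shared SecureRandom.
    private static final ThreadLocal<SecureRandom> PRNG =
        ThreadLocal.withInitial(() -> {
            try {
                return SecureRandom.getInstance("SHA1PRNG");
            } catch (NoSuchAlgorithmException e) {
                throw new IllegalStateException("SHA1PRNG not available", e);
            }
        });

    public static byte[] randomBytes(int n) {
        byte[] buf = new byte[n];
        PRNG.get().nextBytes(buf);
        return buf;
    }
}
```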
[jira] [Commented] (HADOOP-16195) S3A MarshalledCredentials.toString() doesn't print full date/time of expiry
[ https://issues.apache.org/jira/browse/HADOOP-16195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804151#comment-16804151 ] Hudson commented on HADOOP-16195: - FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16299 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/16299/]) HADOOP-16195 MarshalledCredentials toString (stevel: rev df578c07ecf354d87aa97e3ace47099e2ffea9d7) * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/MarshalledCredentials.java > S3A MarshalledCredentials.toString() doesn't print full date/time of expiry > --- > > Key: HADOOP-16195 > URL: https://issues.apache.org/jira/browse/HADOOP-16195 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Fix For: 3.3.0 > > > When you print MarshalledCredentials with session credentials you get the > expiry date, but not the time: > AWS Credentials=session credentials, expiry 2019-03-15Z; > Issue: we use ISO_DATE instead of ISO_DATE_TIME to format the time. Fix: > change
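The formatter difference behind the bug is easy to demonstrate: `ISO_DATE` drops the time-of-day (producing strings like the "2019-03-15Z" above), while `ISO_DATE_TIME` keeps it. A minimal illustration (`ExpiryFormat` is a hypothetical class, not the hadoop-aws code):

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

public class ExpiryFormat {
    // ISO_DATE formats only the date (plus offset) -- the bug in toString().
    public static String dateOnly(OffsetDateTime t) {
        return DateTimeFormatter.ISO_DATE.format(t);
    }

    // ISO_DATE_TIME keeps the time-of-day -- the proposed fix.
    public static String dateAndTime(OffsetDateTime t) {
        return DateTimeFormatter.ISO_DATE_TIME.format(t);
    }
}
```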
[jira] [Commented] (HADOOP-16186) S3Guard: NPE in DynamoDBMetadataStore.lambda$listChildren
[ https://issues.apache.org/jira/browse/HADOOP-16186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804149#comment-16804149 ] Hudson commented on HADOOP-16186: - FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16299 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/16299/]) HADOOP-16186. S3Guard: NPE in DynamoDBMetadataStore.lambda$listChildren. (stevel: rev cfb01869038065defe50ab53d4d1eda4e6cdee33) * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestDynamoDBMiscOperations.java * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java > S3Guard: NPE in DynamoDBMetadataStore.lambda$listChildren > - > > Key: HADOOP-16186 > URL: https://issues.apache.org/jira/browse/HADOOP-16186 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0, 3.1.2 >Reporter: Steve Loughran >Assignee: Gabor Bota >Priority: Major > Fix For: 3.3.0 > > > Test run options. NPE in test teardown > {code} > -Dparallel-tests -DtestsThreadCount=6 -Ds3guard -Ddynamodb > {code} > If you look at the code, it's *exactly* the place fixed in HADOOP-15827, a > change which HADOOP-15947 reverted. > There's clearly some codepath which can surface which is causing failures in > some situations, and having multiple patches switching between the && and || > operators isn't going to fix it
[jira] [Commented] (HADOOP-15999) S3Guard: Better support for out-of-band operations
[ https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804150#comment-16804150 ] Hudson commented on HADOOP-15999: - FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16299 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/16299/]) HADOOP-15999. S3Guard: Better support for out-of-band operations. (stevel: rev b5db2383832881034d57d836a8135a07a2bd1cf4) * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardOutOfBandOperations.java * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java * (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java * (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractGetFileStatusTest.java > S3Guard: Better support for out-of-band operations > -- > > Key: HADOOP-15999 > URL: https://issues.apache.org/jira/browse/HADOOP-15999 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.1.0 >Reporter: Sean Mackrory >Assignee: Gabor Bota >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-15999-007.patch, HADOOP-15999.001.patch, > HADOOP-15999.002.patch, HADOOP-15999.003.patch, HADOOP-15999.004.patch, > HADOOP-15999.005.patch, HADOOP-15999.006.patch, HADOOP-15999.008.patch, > HADOOP-15999.009.patch, out-of-band-operations.patch > > > S3Guard was initially done on the premise that a new MetadataStore would be > the source of truth, and that it wouldn't provide guarantees if updates were > done without using S3Guard. 
> I've been seeing increased demand for better support for scenarios where > operations are done on the data that can't reasonably be done with S3Guard > involved. For example: > * A file is deleted using S3Guard, and replaced by some other tool. S3Guard > can't tell the difference between the new file and delete / list > inconsistency and continues to treat the file as deleted. > * An S3Guard-ed file is overwritten by a longer file by some other tool. When > reading the file, only the length of the original file is read. > We could possibly have smarter behavior here by querying both S3 and the > MetadataStore (even in cases where we may currently only query the > MetadataStore in getFileStatus) and use whichever one has the higher modified > time. > This kills the performance boost we currently get in some workloads with the > short-circuited getFileStatus, but we could keep it with authoritative mode > which should give a larger performance boost. At least we'd get more > correctness without authoritative mode and a clear declaration of when we can > make the assumptions required to short-circuit the process. If we can't > consider S3Guard the source of truth, we need to defer to S3 more. > We'd need to be extra sure of any locality / time zone issues if we start > relying on mod_time more directly, but currently we're tracking the > modification time as returned by S3 anyway.
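The "use whichever one has the higher modified time" proposal can be sketched as a tiny reconciliation helper. All names here are hypothetical, not an actual MetadataStore API; a negative value stands in for "no entry in that source":

```java
public class OutOfBandReconcile {
    // Pick the authoritative modification time between S3 and the
    // MetadataStore: if only one source has an entry, trust it; if both
    // do, trust the one that saw the file more recently (the out-of-band
    // overwrite case described above).
    public static long reconcile(long s3ModTime, long metastoreModTime) {
        if (s3ModTime < 0) {
            return metastoreModTime; // only the MetadataStore knows the file
        }
        if (metastoreModTime < 0) {
            return s3ModTime;        // only S3 knows the file
        }
        return Math.max(s3ModTime, metastoreModTime);
    }
}
```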