[jira] [Commented] (HADOOP-16850) Support getting thread info from thread group for JvmMetrics to improve the performance
[ https://issues.apache.org/jira/browse/HADOOP-16850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17035994#comment-17035994 ] Tao Yang commented on HADOOP-16850: --- Thanks [~aajisaka] for your suggestion. There are dozens of places across many modules that would need to be updated if we set the conf in the callers, so I prefer to call {{new Configuration()}} in JvmMetrics#create as you suggested. Attached v2 patch for review. > Support getting thread info from thread group for JvmMetrics to improve the > performance > --- > > Key: HADOOP-16850 > URL: https://issues.apache.org/jira/browse/HADOOP-16850 > Project: Hadoop Common > Issue Type: Improvement > Components: metrics > Affects Versions: 2.8.6, 2.9.3, 3.1.4, 3.2.2, 2.10.1, 3.3.1 > Reporter: Tao Yang > Priority: Major > Attachments: HADOOP-16850.001.patch, HADOOP-16850.002.patch > > > Recently we found that a jmx request took more than 5 seconds to complete when there were 10,000+ threads in a stressed DataNode process. Meanwhile, other HTTP requests were blocked and some disk operations were affected (we saw many "Slow manageWriterOsCache" messages in the DN log, and these messages were no longer seen after we stopped sending jmx requests). > The excessive time is spent getting thread info via ThreadMXBean, inside which the ThreadImpl#getThreadInfo native method is called. The time complexity of ThreadImpl#getThreadInfo is O(n*n) according to [JDK-8185005|https://bugs.openjdk.java.net/browse/JDK-8185005], and it holds the global thread lock, preventing creation or termination of threads. > To improve this, I propose getting thread info from the thread group, which improves performance considerably by default, while still supporting the original approach when "-Dhadoop.metrics.jvm.use-thread-mxbean=true" is configured in the startup command. 
> An example of performance tests between these two approaches is as follows: > {noformat} > #Threads=100, ThreadMXBean=382372 ns, ThreadGroup=72046 ns, ratio: 5 > #Threads=200, ThreadMXBean=776619 ns, ThreadGroup=83875 ns, ratio: 9 > #Threads=500, ThreadMXBean=3392954 ns, ThreadGroup=216269 ns, ratio: 15 > #Threads=1000, ThreadMXBean=9475768 ns, ThreadGroup=220447 ns, ratio: 42 > #Threads=2000, ThreadMXBean=53833729 ns, ThreadGroup=579608 ns, ratio: 92 > #Threads=3000, ThreadMXBean=196829971 ns, ThreadGroup=1157670 ns, ratio: 170 > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
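For illustration, here is a minimal standalone sketch of the two thread-enumeration approaches being compared above. The actual change lives in JvmMetrics; the class and method names below are illustrative only, and note that ThreadGroup#activeCount is documented as an estimate, so the array is over-allocated before enumerating.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadCountSketch {

    // Cheap path: walk up to the root ThreadGroup and enumerate live
    // threads, avoiding the O(n^2) native ThreadImpl#getThreadInfo call.
    static int countViaThreadGroup() {
        ThreadGroup root = Thread.currentThread().getThreadGroup();
        while (root.getParent() != null) {
            root = root.getParent();
        }
        // activeCount() is only an estimate; over-allocate to be safe.
        Thread[] threads = new Thread[root.activeCount() * 2 + 10];
        return root.enumerate(threads, true); // recurse into subgroups
    }

    // Original path: ThreadMXBean, which ends up in the native
    // ThreadImpl#getThreadInfo (O(n^2) per JDK-8185005).
    static int countViaThreadMXBean() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        int count = 0;
        for (ThreadInfo info : bean.getThreadInfo(bean.getAllThreadIds())) {
            if (info != null) { // a thread may have died since getAllThreadIds()
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // Both approaches should see at least the main thread.
        System.out.println(countViaThreadGroup() > 0 && countViaThreadMXBean() > 0);
    }
}
```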
[jira] [Updated] (HADOOP-16850) Support getting thread info from thread group for JvmMetrics to improve the performance
[ https://issues.apache.org/jira/browse/HADOOP-16850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Yang updated HADOOP-16850: -- Attachment: HADOOP-16850.002.patch
[jira] [Commented] (HADOOP-16850) Support getting thread info from thread group for JvmMetrics to improve the performance
[ https://issues.apache.org/jira/browse/HADOOP-16850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17035868#comment-17035868 ] Akira Ajisaka commented on HADOOP-16850: Thanks [~Tao Yang] for the reply. I think that is ok. If it is difficult to set the conf in the callers, it's fine to call {{new Configuration()}} in JvmMetrics#create. {{new Configuration()}} takes some time, but it is called at most once, so the overall cost is not expensive.
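As a sketch of the opt-out mechanism under discussion: the patch's current behavior keys off a system property (the key name is taken from the issue description; the class below is illustrative, not the actual JvmMetrics code). A Configuration-based variant, as suggested above, would read the same key via conf.getBoolean instead of System.getProperty.

```java
public class JvmMetricsFlagSketch {

    static final String USE_MXBEAN_KEY = "hadoop.metrics.jvm.use-thread-mxbean";

    // Default to the cheap ThreadGroup path; operators opt back into
    // ThreadMXBean with -Dhadoop.metrics.jvm.use-thread-mxbean=true.
    static boolean useThreadMXBean() {
        return Boolean.parseBoolean(System.getProperty(USE_MXBEAN_KEY, "false"));
    }

    public static void main(String[] args) {
        System.out.println(useThreadMXBean());
    }
}
```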
[jira] [Commented] (HADOOP-16850) Support getting thread info from thread group for JvmMetrics to improve the performance
[ https://issues.apache.org/jira/browse/HADOOP-16850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17035857#comment-17035857 ] Tao Yang commented on HADOOP-16850: --- Thanks [~aajisaka] for the review. {quote}Would you add a Hadoop parameter instead of a system property? That way we can set the parameter in core-site.xml instead of using the -D option. {quote} I had considered this before and thought it would require passing the parameter in from outside, updating JvmMetrics#, JvmMetrics#create and all their callers across 4 modules. I would like to update the patch if that is ok.
[jira] [Commented] (HADOOP-16850) Support getting thread info from thread group for JvmMetrics to improve the performance
[ https://issues.apache.org/jira/browse/HADOOP-16850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17035847#comment-17035847 ] Akira Ajisaka commented on HADOOP-16850: Thanks [~Tao Yang] for the report and for providing the patch. It looks interesting. Would you add a Hadoop parameter instead of a system property? That way we can set the parameter in core-site.xml instead of using the -D option.
[jira] [Commented] (HADOOP-16850) Support getting thread info from thread group for JvmMetrics to improve the performance
[ https://issues.apache.org/jira/browse/HADOOP-16850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17035835#comment-17035835 ] Tao Yang commented on HADOOP-16850: --- Hi [~aajisaka], could you please take a look at this issue? Thanks.
[GitHub] [hadoop] aajisaka commented on issue #1832: HDFS-13989. RBF: Add FSCK to the Router
aajisaka commented on issue #1832: HDFS-13989. RBF: Add FSCK to the Router URL: https://github.com/apache/hadoop/pull/1832#issuecomment-585497638 Merged. Thanks @goiri for your reviews. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] aajisaka merged pull request #1832: HDFS-13989. RBF: Add FSCK to the Router
aajisaka merged pull request #1832: HDFS-13989. RBF: Add FSCK to the Router URL: https://github.com/apache/hadoop/pull/1832
[GitHub] [hadoop] hadoop-yetus commented on issue #1845: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()
hadoop-yetus commented on issue #1845: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init() URL: https://github.com/apache/hadoop/pull/1845#issuecomment-585428044 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 26m 1s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 26s | trunk passed | | +1 :green_heart: | compile | 0m 35s | trunk passed | | +1 :green_heart: | checkstyle | 0m 29s | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | trunk passed | | +1 :green_heart: | shadedclient | 14m 48s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 29s | trunk passed | | +0 :ok: | spotbugs | 1m 1s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 58s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 33s | the patch passed | | +1 :green_heart: | compile | 0m 27s | the patch passed | | -1 :x: | javac | 0m 27s | hadoop-tools_hadoop-aws generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15) | | -0 :warning: | checkstyle | 0m 20s | hadoop-tools/hadoop-aws: The patch generated 3 new + 26 unchanged - 0 fixed = 29 total (was 26) | | +1 :green_heart: | mvnsite | 0m 32s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 13m 46s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 25s | the patch passed | | +1 :green_heart: | findbugs | 1m 3s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 28s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. | | | | 84m 4s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1845/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1845 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint xml | | uname | Linux d470c8c306b1 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / f09710b | | Default Java | 1.8.0_242 | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1845/1/artifact/out/diff-compile-javac-hadoop-tools_hadoop-aws.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1845/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1845/1/testReport/ | | Max. process+thread count | 401 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1845/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hadoop] hadoop-yetus commented on issue #1845: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()
hadoop-yetus commented on issue #1845: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init() URL: https://github.com/apache/hadoop/pull/1845#issuecomment-585421681 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 13s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 22m 7s | trunk passed | | +1 :green_heart: | compile | 0m 32s | trunk passed | | +1 :green_heart: | checkstyle | 0m 26s | trunk passed | | +1 :green_heart: | mvnsite | 0m 38s | trunk passed | | +1 :green_heart: | shadedclient | 16m 33s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 25s | trunk passed | | +0 :ok: | spotbugs | 0m 58s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 57s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | the patch passed | | +1 :green_heart: | compile | 0m 28s | the patch passed | | -1 :x: | javac | 0m 28s | hadoop-tools_hadoop-aws generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15) | | -0 :warning: | checkstyle | 0m 18s | hadoop-tools/hadoop-aws: The patch generated 3 new + 26 unchanged - 0 fixed = 29 total (was 26) | | +1 :green_heart: | mvnsite | 0m 30s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 15m 26s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 22s | the patch passed | | +1 :green_heart: | findbugs | 1m 1s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 26s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | The patch does not generate ASF License warnings. | | | | 64m 40s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1845/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1845 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint xml | | uname | Linux e117e7e088f9 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / f09710b | | Default Java | 1.8.0_232 | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1845/2/artifact/out/diff-compile-javac-hadoop-tools_hadoop-aws.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1845/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1845/2/testReport/ | | Max. process+thread count | 335 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1845/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hadoop] steveloughran commented on issue #1845: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()
steveloughran commented on issue #1845: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init() URL: https://github.com/apache/hadoop/pull/1845#issuecomment-585398805 The new exception breaks a few tests:
```
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.411 s - in org.apache.hadoop.fs.s3a.ITestS3AMetadataPersistenceException
[ERROR] Tests run: 28, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 103.507 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
[ERROR] testCLIFsckFailInitializeFs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)  Time elapsed: 2.006 s  <<< ERROR!
org.apache.hadoop.fs.s3a.UnknownStoreException: s3a://this-bucket-does-not-exist-dbdfaae0-e22c-4b64-a12e-4b1f31da43ba/
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:255)
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:167)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2971)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2812)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2696)
	at org.apache.hadoop.fs.s3a.s3guard.S3GuardFsck.compareS3ToMs(S3GuardFsck.java:117)
	at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Fsck.run(S3GuardTool.java:1697)
	at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:480)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:2002)
	at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.run(AbstractS3GuardToolTestBase.java:154)
	at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB.lambda$testCLIFsckFailInitializeFs$5(ITestS3GuardToolDynamoDB.java:323)
	at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:498)
	at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:384)
	at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:453)
	at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB.testCLIFsckFailInitializeFs(ITestS3GuardToolDynamoDB.java:322)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The specified bucket does not exist (Service: Amazon S3; Status Code: 404; Error Code: NoSuchBucket; Request ID: 65AB7E8F8BBDD54C; S3 Extended Request ID: s2eT1WDJIduDI+GtOOaJnBMjQsekFL3UcXvTtMPgrXQQ4qdueS2ApAxa0GBPvkSYo0lUbNGYVT8=), S3 Extended Request ID: s2eT1WDJIduDI+GtOOaJnBMjQsekFL3UcXvTtMPgrXQQ4qdueS2ApAxa0GBPvkSYo0lUbNGYVT8=
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonH
```
[GitHub] [hadoop] steveloughran opened a new pull request #1845: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()
steveloughran opened a new pull request #1845: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init() URL: https://github.com/apache/hadoop/pull/1845 See #1838. Adds a new exception, UnknownStoreException, to indicate "there's no store there":
* raised in verify bucket existence checks
* and when translating AWS exceptions into IOEs
* the S3A retry policy fails fast on this
* and s3GetFileStatus recognises the same failure and raises it
Except when the metastore short-circuits S3 IO, this means all operations against a nonexistent store will fail with a unique exception. ITestS3ABucketExistence is extended to:
* disable the metastore (getFileStatus(/) was returning a value)
* always create new instances
* invoke all the operations which catch and swallow FNFEs (exists, isFile, isDir, delete)
Also: disable the probe for landsat-pds so that we get more test coverage of the option. Tested: S3 Ireland w/ DDB, with the probe set to 0 by default, everywhere (no obvious speedup...)
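A hedged sketch of the fail-fast idea described in the PR: a dedicated IOException subtype lets a retry policy separate "bucket does not exist" from transient failures. The class and method names below are illustrative stand-ins, not the actual S3A code.

```java
import java.io.IOException;

// Illustrative stand-in for the PR's UnknownStoreException:
// a distinct type signalling "the store itself is missing".
class NoSuchStoreException extends IOException {
    NoSuchStoreException(String uri) {
        super("Store not found: " + uri);
    }
}

public class FailFastRetrySketch {

    // A retry policy can fail fast on the dedicated type while still
    // retrying ordinary, possibly transient, IOExceptions.
    static boolean shouldRetry(IOException e) {
        return !(e instanceof NoSuchStoreException);
    }

    public static void main(String[] args) {
        System.out.println(shouldRetry(new NoSuchStoreException("s3a://no-such-bucket/")));
        System.out.println(shouldRetry(new IOException("connection reset")));
    }
}
```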
[GitHub] [hadoop] hadoop-yetus commented on issue #1842: HADOOP-16730 : ABFS: Add Authorizer Interface
hadoop-yetus commented on issue #1842: HADOOP-16730 : ABFS: Add Authorizer Interface URL: https://github.com/apache/hadoop/pull/1842#issuecomment-585379660 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 18s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 22m 41s | trunk passed | | +1 :green_heart: | compile | 0m 27s | trunk passed | | +1 :green_heart: | checkstyle | 0m 21s | trunk passed | | +1 :green_heart: | mvnsite | 0m 30s | trunk passed | | +1 :green_heart: | shadedclient | 16m 35s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 23s | trunk passed | | +0 :ok: | spotbugs | 1m 0s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 56s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 27s | the patch passed | | +1 :green_heart: | compile | 0m 23s | the patch passed | | +1 :green_heart: | javac | 0m 23s | the patch passed | | -0 :warning: | checkstyle | 0m 16s | hadoop-tools/hadoop-azure: The patch generated 40 new + 9 unchanged - 0 fixed = 49 total (was 9) | | +1 :green_heart: | mvnsite | 0m 25s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 22s | patch has no errors when building and testing our client artifacts. 
| | -1 :x: | javadoc | 0m 21s | hadoop-tools_hadoop-azure generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) | | -1 :x: | findbugs | 0m 56s | hadoop-tools/hadoop-azure generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 17s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | The patch does not generate ASF License warnings. | | | | 64m 52s | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-tools/hadoop-azure | | | Unused public or protected field:org.apache.hadoop.fs.azurebfs.extensions.AuthorizationResourceResult.authorizerAction In AuthorizationResourceResult.java | | | Unwritten public or protected field:org.apache.hadoop.fs.azurebfs.extensions.AuthorizationResourceResult.authToken At AuthorizationStatus.java:[line 78] | | | Unwritten public or protected field:org.apache.hadoop.fs.azurebfs.extensions.AuthorizationResourceResult.storePathUri At AuthorizationStatus.java:[line 91] | | | org.apache.hadoop.fs.azurebfs.extensions.AuthorizationResult.getAuthResourceResult() may expose internal representation by returning AuthorizationResult.authResourceResult At AuthorizationResult.java:by returning AuthorizationResult.authResourceResult At AuthorizationResult.java:[line 40] | | | org.apache.hadoop.fs.azurebfs.extensions.AuthorizationResult.setAuthResourceResult(AuthorizationResourceResult[]) may expose internal representation by storing an externally mutable object into AuthorizationResult.authResourceResult At AuthorizationResult.java:by storing an externally mutable object into AuthorizationResult.authResourceResult At AuthorizationResult.java:[line 45] | | | Possible null pointer dereference of relativePath in org.apache.hadoop.fs.azurebfs.services.AbfsClient.listPath(String, boolean, int, String) Dereferenced at AbfsClient.java:relativePath in org.apache.hadoop.fs.azurebfs.services.AbfsClient.listPath(String, boolean, int, String) Dereferenced at 
AbfsClient.java:[line 225] | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1842 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2cd01f55bef9 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / f09710b | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1842/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | javadoc | htt
[GitHub] [hadoop] hadoop-yetus commented on issue #1844: HADOOP-16706. ITestClientUrlScheme fails for accounts which don't support HTTP
hadoop-yetus commented on issue #1844: HADOOP-16706. ITestClientUrlScheme fails for accounts which don't support HTTP URL: https://github.com/apache/hadoop/pull/1844#issuecomment-585376509 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 30m 34s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 22m 36s | trunk passed | | +1 :green_heart: | compile | 0m 27s | trunk passed | | +1 :green_heart: | checkstyle | 0m 22s | trunk passed | | +1 :green_heart: | mvnsite | 0m 32s | trunk passed | | +1 :green_heart: | shadedclient | 17m 39s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 29s | trunk passed | | +0 :ok: | spotbugs | 1m 11s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 1m 8s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 33s | the patch passed | | +1 :green_heart: | compile | 0m 26s | the patch passed | | +1 :green_heart: | javac | 0m 26s | the patch passed | | +1 :green_heart: | checkstyle | 0m 17s | the patch passed | | +1 :green_heart: | mvnsite | 0m 30s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 17m 26s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 24s | the patch passed | | +1 :green_heart: | findbugs | 1m 8s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 29s | hadoop-azure in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 33s | The patch does not generate ASF License warnings. | | | | 98m 18s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1844/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1844 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux d9fb28d9d39b 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / f09710b | | Default Java | 1.8.0_242 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1844/1/testReport/ | | Max. process+thread count | 308 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1844/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] steveloughran commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()
steveloughran commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init() URL: https://github.com/apache/hadoop/pull/1838#issuecomment-585351633 Also once I fix that by adding a trailing /, the getFileStatus("/") fails to raise an FNFE, which is because S3Guard is enabled for all buckets on my test setup, *and s3guard DDB will create a stub FS Status on a root entry*. ``` java.lang.AssertionError: Expected a java.io.FileNotFoundException to be thrown, but got the result: : S3AFileStatus{path=s3a://random-bucket-11df1b68-2535-4a9b-9fd5-c6a4d5a6c192/; isDirectory=true; modification_time=0; access_time=0; owner=stevel; group=stevel; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; isEncrypted=true; isErasureCoded=false} isEmptyDirectory=UNKNOWN eTag=null versionId=null at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:499) at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:384) at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:453) at org.apache.hadoop.fs.s3a.ITestS3ABucketExistence.testNoBucketProbing(ITestS3ABucketExistence.java:59) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266) at java.util.concurrent.FutureTask.run(FutureTask.java) at java.lang.Thread.run(Thread.java:748) ``` We could consider shortcutting some of the getFileStatus queries against / in S3A FS itself -it's always a dir after all
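The "adding a trailing /" fix mentioned above comes down to how java.net.URI treats a bare authority: without the slash the URI's path component is empty, so a metadata-store path check can reject it as "not absolute". A self-contained sketch (the bucket name is illustrative):

```java
import java.net.URI;

public class RootPathSketch {
    public static void main(String[] args) {
        // A bare s3a URI has an empty path component...
        URI bare = URI.create("s3a://random-bucket-example");
        // ...while a trailing slash yields the absolute root path "/".
        URI root = URI.create("s3a://random-bucket-example/");
        System.out.println("bare path: [" + bare.getPath() + "]");
        System.out.println("root path: [" + root.getPath() + "]");
    }
}
```

Here `bare.getPath()` is the empty string while `root.getPath()` is "/", which is why `new Path(uri)` built from a slash-terminated URI passes an absolute-path precondition that the bare form fails.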
[GitHub] [hadoop] snvijaya commented on issue #1842: HADOOP-16730 : ABFS: Add Authorizer Interface
snvijaya commented on issue #1842: HADOOP-16730 : ABFS: Add Authorizer Interface URL: https://github.com/apache/hadoop/pull/1842#issuecomment-585350897 Test results for account without namespace enabled (East US 2): [INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0 [ERROR] Tests run: 415, Failures: 0, Errors: 0, Skipped: 244 [WARNING] Tests run: 194, Failures: 0, Errors: 0, Skipped: 128 Test results for account with namespace enabled (East US 2): [ERROR] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0 [ERROR] Tests run: 415, Failures: 0, Errors: 0, Skipped: 36 [WARNING] Tests run: 194, Failures: 0, Errors: 0, Skipped: 128
[jira] [Updated] (HADOOP-16730) ABFS: Support for Shared Access Signatures (SAS)
[ https://issues.apache.org/jira/browse/HADOOP-16730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sneha Vijayarajan updated HADOOP-16730: --- Status: Patch Available (was: Open) > ABFS: Support for Shared Access Signatures (SAS) > > > Key: HADOOP-16730 > URL: https://issues.apache.org/jira/browse/HADOOP-16730 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/azure >Affects Versions: 3.2.1 >Reporter: Thomas Marqardt >Assignee: Sneha Vijayarajan >Priority: Major > Original Estimate: 1,008h > Remaining Estimate: 1,008h > > ABFS supports OAuth and Shared Key but currently lacks support for [Shared > Access Signatures > (SAS)|[https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview]]. > SAS is a great way to constrain access to a low-privilege ABFS client. The > ABFS client does not need to possess persistent credentials for accessing > storage but instead can request temporary, constrained access tokens from a > trusted endpoint. This endpoint can authenticate the caller, make an > authorization decision and return a constrained SAS token. The token may > have an expiration, it may be scoped to a specific file or directory, and it > may grant an action or set of actions such as read, write, list, or delete. > Azure Storage also has a new identity based SAS scheme in preview named > Delegation SAS. These new Delegation SAS have these advantages over Service > SAS: > 1) Delegation SAS provide authentication as well as authorization. The user > identity associated with each request will appear in the logs when logging is > enabled for the account. > 2) Instead of using storage account keys to sign tokens, Delegation SAS > relies on keys assigned to each user. These keys are called user delegation > keys. If a storage account key is leaked, an attacker would have full access > to the storage account. 
If a user delegation key is leaked, an attacker > would only have access to resources that user has access to within the Blob > service–for example, the user might only have read access to a specific > container. > This feature will add support for the ABFS driver to authenticate against a > trusted endpoint. The endpoint will return a SAS which the ABFS driver will > use to access Azure storage. The SAS may be a container or directory SAS to > be used for all subsequent operations, and thus cached for the lifetime of > the filesystem. Or it may be a SAS to be used for the current filesystem > operation, in this case, the ABFS driver will request a SAS for each > operation. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()
steveloughran commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init() URL: https://github.com/apache/hadoop/pull/1838#issuecomment-585343180 testing myself, setting validation to 0 for the entire test run to field test it better. one failure so far in the new test ``` [INFO] Running org.apache.hadoop.fs.s3a.ITestS3GuardTtl [ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 7.739 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3ABucketExistence [ERROR] testNoBucketProbing(org.apache.hadoop.fs.s3a.ITestS3ABucketExistence) Time elapsed: 1.258 s <<< ERROR! java.lang.IllegalArgumentException: Path s3a://random-bucket-442f6634-4892-4239-bd8c-ac5a2b0a3700 is not absolute at com.google.common.base.Preconditions.checkArgument(Preconditions.java:216) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.checkPath(DynamoDBMetadataStore.java:1851) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.get(DynamoDBMetadataStore.java:718) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.get(DynamoDBMetadataStore.java:205) at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getWithTtl(S3Guard.java:900) at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2729) at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2696) at org.apache.hadoop.fs.s3a.ITestS3ABucketExistence.lambda$testNoBucketProbing$0(ITestS3ABucketExistence.java:66) at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:498) at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:384) at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:453) at org.apache.hadoop.fs.s3a.ITestS3ABucketExistence.testNoBucketProbing(ITestS3ABucketExistence.java:64) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ```
[GitHub] [hadoop] liuml07 commented on a change in pull request #1840: HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets
liuml07 commented on a change in pull request #1840: HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets URL: https://github.com/apache/hadoop/pull/1840#discussion_r378422262 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardOutOfBandOperations.java ## @@ -969,16 +970,42 @@ private void deleteFileInListing() deleteFile(rawFS, testFilePath); // File status will be still readable from s3guard - FileStatus status = guardedFs.getFileStatus(testFilePath); + S3AFileStatus status = (S3AFileStatus) + guardedFs.getFileStatus(testFilePath); LOG.info("authoritative: {} status: {}", allowAuthoritative, status); - expectExceptionWhenReading(testFilePath, text); - expectExceptionWhenReadingOpenFileAPI(testFilePath, text, null); - expectExceptionWhenReadingOpenFileAPI(testFilePath, text, status); + if (isVersionedChangeDetection() && status.getVersionId() != null) { +// when the status entry has a version ID, then that may be used +// when opening the file on what is clearly a versioned store. +int length = text.length(); +byte[] bytes = readOpenFileAPI(guardedFs, testFilePath, length, null); +Assertions.assertThat(toChar(bytes)) +.describedAs("openFile(%s)", testFilePath) +.isEqualTo(text); +// reading the rawFS with status will also work. +bytes = readOpenFileAPI(rawFS, testFilePath, length, status); Review comment: > bytes = readOpenFileAPI(rawFS, testFilePath, length, null); The file was deleted and I was expecting FNF exception here. I have not tested here with versioned bucket.
[GitHub] [hadoop] liuml07 commented on a change in pull request #1840: HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets
liuml07 commented on a change in pull request #1840: HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets URL: https://github.com/apache/hadoop/pull/1840#discussion_r378421141 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardOutOfBandOperations.java ## @@ -969,16 +970,42 @@ private void deleteFileInListing() deleteFile(rawFS, testFilePath); // File status will be still readable from s3guard - FileStatus status = guardedFs.getFileStatus(testFilePath); + S3AFileStatus status = (S3AFileStatus) + guardedFs.getFileStatus(testFilePath); LOG.info("authoritative: {} status: {}", allowAuthoritative, status); - expectExceptionWhenReading(testFilePath, text); - expectExceptionWhenReadingOpenFileAPI(testFilePath, text, null); - expectExceptionWhenReadingOpenFileAPI(testFilePath, text, status); + if (isVersionedChangeDetection() && status.getVersionId() != null) { Review comment: This is very specific to this test, which is not too long to me; and we don't need to pass some parameters to the new helper method. I’m happy it sits here or a new method.
[GitHub] [hadoop] steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()
steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init() URL: https://github.com/apache/hadoop/pull/1838#discussion_r378419167 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java ## @@ -0,0 +1,119 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.s3a; + +import java.io.FileNotFoundException; +import java.io.IOException; +import java.net.URI; +import java.util.UUID; + +import org.junit.After; +import org.junit.Test; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.io.IOUtils; + +import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset; +import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset; +import static org.apache.hadoop.fs.s3a.Constants.FS_S3A; +import static org.apache.hadoop.fs.s3a.Constants.S3A_BUCKET_PROBE; +import static org.apache.hadoop.test.LambdaTestUtils.intercept; + +/** + * Class to test bucket existence api. + * See {@link S3AFileSystem#doBucketProbing()}. 
+ */ +public class ITestS3ABucketExistence extends AbstractS3ATestBase { + + private FileSystem fs; + + private final String randomBucket = + "random-bucket-" + UUID.randomUUID().toString(); + + private final URI uri = URI.create(FS_S3A + "://" + randomBucket); + + @Test + public void testNoBucketProbing() throws Exception { +Configuration configuration = getConfiguration(); +configuration.setInt(S3A_BUCKET_PROBE, 0); +try { + fs = FileSystem.get(uri, configuration); +} catch (IOException ex) { + LOG.error("Exception : ", ex); + throw ex; +} + +Path path = new Path(uri); +intercept(FileNotFoundException.class, +"No such file or directory: " + path, +() -> fs.getFileStatus(path)); + +Path src = new Path(fs.getUri() + "/testfile"); +byte[] data = dataset(1024, 'a', 'z'); +intercept(FileNotFoundException.class, +"The specified bucket does not exist", +() -> writeDataset(fs, src, data, data.length, 1024 * 1024, true)); + } + + @Test + public void testBucketProbingV1() throws Exception { +Configuration configuration = getConfiguration(); +configuration.setInt(S3A_BUCKET_PROBE, 1); +intercept(FileNotFoundException.class, +() -> FileSystem.get(uri, configuration)); + } + + @Test + public void testBucketProbingV2() throws Exception { +Configuration configuration = getConfiguration(); +configuration.setInt(S3A_BUCKET_PROBE, 2); +intercept(FileNotFoundException.class, +() -> FileSystem.get(uri, configuration)); + } + + @Test + public void testBucketProbingParameterValidation() throws Exception { +Configuration configuration = getConfiguration(); +configuration.setInt(S3A_BUCKET_PROBE, 3); +intercept(IllegalArgumentException.class, +"Value of " + S3A_BUCKET_PROBE + " should be between 0 to 2", +"Should throw IllegalArgumentException", +() -> FileSystem.get(uri, configuration)); +configuration.setInt(S3A_BUCKET_PROBE, -1); +intercept(IllegalArgumentException.class, +"Value of " + S3A_BUCKET_PROBE + " should be between 0 to 2", +"Should throw IllegalArgumentException", +() -> 
FileSystem.get(uri, configuration)); + } + + @Override + protected Configuration getConfiguration() { +Configuration configuration = super.getConfiguration(); +S3ATestUtils.disableFilesystemCaching(configuration); +return configuration; + } + + @After + public void tearDown() throws Exception { +IOUtils.cleanupWithLogger(getLogger(), fs); Review comment: no worries
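The test file above leans on LambdaTestUtils.intercept to assert that a lambda fails with a given exception type. A minimal, self-contained sketch of that pattern (this is illustrative, not Hadoop's actual implementation):

```java
import java.util.concurrent.Callable;

// Sketch of an intercept-style assertion helper: evaluate a callable and
// require that it throws the expected exception type, returning the
// exception so the caller can assert on its message.
public class InterceptSketch {
    public static <E extends Throwable> E intercept(
            Class<E> clazz, Callable<?> eval) throws Exception {
        Object result;
        try {
            result = eval.call();
        } catch (Throwable t) {
            if (clazz.isInstance(t)) {
                return clazz.cast(t);   // the expected failure: hand it back
            }
            throw new AssertionError(
                "Expected " + clazz.getName() + " but caught: " + t, t);
        }
        // No exception at all is also a failure.
        throw new AssertionError(
            "Expected " + clazz.getName() + " but got result: " + result);
    }

    public static void main(String[] args) throws Exception {
        IllegalArgumentException e = intercept(IllegalArgumentException.class,
            () -> { throw new IllegalArgumentException("bad probe value"); });
        System.out.println(e.getMessage());
    }
}
```

Returning the caught exception is what lets tests like testBucketProbingParameterValidation chain a message check ("Value of ... should be between 0 to 2") onto the interception.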
[GitHub] [hadoop] steveloughran opened a new pull request #1844: HADOOP-16706. ITestClientUrlScheme fails for accounts which don't support HTTP
steveloughran opened a new pull request #1844: HADOOP-16706. ITestClientUrlScheme fails for accounts which don't support HTTP URL: https://github.com/apache/hadoop/pull/1844 Adds a new service code to recognise accounts without HTTP support; catches that and considers that a successful validation of the ability of the client to switch to http when the test parameters expect that. Tested: Azure cardiff, with a storage account which doesn't support HTTP
[jira] [Work started] (HADOOP-16706) ITestClientUrlScheme fails for accounts which don't support HTTP
[ https://issues.apache.org/jira/browse/HADOOP-16706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-16706 started by Steve Loughran. --- > ITestClientUrlScheme fails for accounts which don't support HTTP > > > Key: HADOOP-16706 > URL: https://issues.apache.org/jira/browse/HADOOP-16706 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure, test >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > > I'm setting up a new Storage account to Play with encryption options. I'm > getting a test failure in > {{testClientUrlScheme[0](org.apache.hadoop.fs.azurebfs.ITestClientUrlScheme)}} > as it doesn't support HTTP. > Proposed: catch, recognise 'AccountRequiresHttps' and downgrade those > particular parameterised tests to skipped tests.
[jira] [Assigned] (HADOOP-16706) ITestClientUrlScheme fails for accounts which don't support HTTP
[ https://issues.apache.org/jira/browse/HADOOP-16706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-16706: --- Assignee: Steve Loughran (was: Gabor Bota) > ITestClientUrlScheme fails for accounts which don't support HTTP > > > Key: HADOOP-16706 > URL: https://issues.apache.org/jira/browse/HADOOP-16706 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure, test >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > > I'm setting up a new Storage account to Play with encryption options. I'm > getting a test failure in > {{testClientUrlScheme[0](org.apache.hadoop.fs.azurebfs.ITestClientUrlScheme)}} > as it doesn't support HTTP. > Proposed: catch, recognise 'AccountRequiresHttps' and downgrade those > particular parameterised tests to skipped tests.
[jira] [Created] (HADOOP-16859) ABFS: Add unbuffer support to AbfsInputStream
Sahil Takiar created HADOOP-16859: - Summary: ABFS: Add unbuffer support to AbfsInputStream Key: HADOOP-16859 URL: https://issues.apache.org/jira/browse/HADOOP-16859 Project: Hadoop Common Issue Type: Sub-task Reporter: Sahil Takiar Assignee: Sahil Takiar Added unbuffer support to {{AbfsInputStream}} so that apps can cache ABFS file handles.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1826: HADOOP-16823. Large DeleteObject requests are their own Thundering Herd
hadoop-yetus removed a comment on issue #1826: HADOOP-16823. Large DeleteObject requests are their own Thundering Herd URL: https://github.com/apache/hadoop/pull/1826#issuecomment-584758279 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 34s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 10 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 1m 6s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 10s | trunk passed | | +1 :green_heart: | compile | 16m 54s | trunk passed | | +1 :green_heart: | checkstyle | 2m 41s | trunk passed | | +1 :green_heart: | mvnsite | 2m 35s | trunk passed | | +1 :green_heart: | shadedclient | 21m 12s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 40s | trunk passed | | +0 :ok: | spotbugs | 1m 9s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 3m 17s | trunk passed | | -0 :warning: | patch | 1m 34s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 25s | the patch passed | | +1 :green_heart: | compile | 16m 27s | the patch passed | | +1 :green_heart: | javac | 16m 27s | the patch passed | | -0 :warning: | checkstyle | 2m 40s | root: The patch generated 6 new + 75 unchanged - 2 fixed = 81 total (was 77) | | +1 :green_heart: | mvnsite | 2m 21s | the patch passed | | -1 :x: | whitespace | 0m 0s | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 14m 15s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 44s | the patch passed | | +1 :green_heart: | findbugs | 3m 48s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 9m 22s | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 1m 30s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 52s | The patch does not generate ASF License warnings. 
| | | | 124m 8s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/10/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1826 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle markdownlint | | uname | Linux 14c16a167728 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / cc8ae59 | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/10/artifact/out/diff-checkstyle-root.txt | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/10/artifact/out/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/10/testReport/ | | Max. process+thread count | 1448 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/10/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsub
[GitHub] [hadoop] hadoop-yetus commented on issue #1843: HADOOP-16794. encryption over rename/copy
hadoop-yetus commented on issue #1843: HADOOP-16794. encryption over rename/copy URL: https://github.com/apache/hadoop/pull/1843#issuecomment-585232459 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 34s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 9s | trunk passed | | +1 :green_heart: | compile | 0m 35s | trunk passed | | +1 :green_heart: | checkstyle | 0m 27s | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | trunk passed | | +1 :green_heart: | shadedclient | 15m 3s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 29s | trunk passed | | +0 :ok: | spotbugs | 0m 59s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 58s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 33s | the patch passed | | +1 :green_heart: | compile | 0m 28s | the patch passed | | +1 :green_heart: | javac | 0m 28s | the patch passed | | +1 :green_heart: | checkstyle | 0m 18s | hadoop-tools/hadoop-aws: The patch generated 0 new + 11 unchanged - 3 fixed = 11 total (was 14) | | +1 :green_heart: | mvnsite | 0m 32s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 46s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 25s | the patch passed | | +1 :green_heart: | findbugs | 1m 2s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 21s | hadoop-aws in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 33s | The patch does not generate ASF License warnings. | | | | 58m 13s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1843/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1843 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7b0731ad836b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 749e45d | | Default Java | 1.8.0_242 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1843/1/testReport/ | | Max. process+thread count | 420 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1843/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-16858) S3Guard fsck: Add option to prune orphaned entries
Gabor Bota created HADOOP-16858: --- Summary: S3Guard fsck: Add option to prune orphaned entries Key: HADOOP-16858 URL: https://issues.apache.org/jira/browse/HADOOP-16858 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 3.3.0 Reporter: Gabor Bota -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on issue #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation
steveloughran commented on issue #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation URL: https://github.com/apache/hadoop/pull/1823#issuecomment-585208625 See #1843 for a patch on top of this, to help propagate settings better; let's keep discussion in this patch, and Mukund can cherry-pick that new commit while we get this right. I think we should have a consistent policy here:
1. if the client has any encryption settings, including explicit AES256, KMS+default key, or KMS+custom key, then they will set the encryption options on the copy;
2. else the encryption settings of the source file are retained.
This is nice and memorable. It needs to apply for all s3a encryption settings; this patch currently only does it for SSE-KMS.
[GitHub] [hadoop] steveloughran commented on issue #1843: HADOOP-16794. encryption over rename/copy
steveloughran commented on issue #1843: HADOOP-16794. encryption over rename/copy URL: https://github.com/apache/hadoop/pull/1843#issuecomment-585207520 Testing: only ran this new test with
* client set to SSE-KMS with an explicit key
* bucket in Ireland with default encryption of AES-256.
The new test shows the file -> SSE-KMS afterwards. It doesn't verify that if the src was SSE-KMS, then what people want for key propagation holds. Nor does it verify that if the client is AES-256, renaming an SSE-KMS file will convert it to AES-256, which is what I want for a simple, consistent model.
[GitHub] [hadoop] steveloughran opened a new pull request #1843: HADOOP-16794. encryption over rename/copy
steveloughran opened a new pull request #1843: HADOOP-16794. encryption over rename/copy URL: https://github.com/apache/hadoop/pull/1843 Patch atop Mukund's #1823 patch returns to copying the SSE algorithm header, but then
* extracts the full KMS settings from the src and sets them on the request,
* overriding with the S3A client's KMS settings,
* and tries to test it better.
This is still not ready to go in. I think we should have a consistent policy here:
1. if the client has any encryption settings, including explicit AES256, KMS+default key, or KMS+custom key, then they will set the encryption options on the copy;
2. else the encryption settings of the source file are retained.
This is nice and memorable. It needs to apply for all s3a encryption settings; this patch only does it for SSE-KMS.
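The two-rule policy above can be sketched as a small decision helper. Everything here is illustrative: the class and method names (`CopyEncryptionPolicy`, `chooseEncryption`) and the string encoding of settings are hypothetical and do not exist in the actual S3A code base.

```java
// Sketch of the proposed copy-encryption policy from the PR description.
// All names here are illustrative only, not the real S3A implementation.
final class CopyEncryptionPolicy {

    /**
     * Rule 1: any client-side encryption setting (explicit AES256,
     * KMS + default key, KMS + custom key) is applied to the copy.
     * Rule 2: with no client setting, the source object's encryption
     * settings are retained.
     */
    static String chooseEncryption(String clientSetting, String sourceSetting) {
        if (clientSetting != null && !clientSetting.isEmpty()) {
            return clientSetting;   // rule 1: client settings win
        }
        return sourceSetting;       // rule 2: retain the source's settings
    }

    public static void main(String[] args) {
        // Client configured with SSE-KMS and a custom key: overrides
        // the source object's AES256.
        if (!"SSE-KMS:custom-key".equals(
                chooseEncryption("SSE-KMS:custom-key", "AES256"))) {
            throw new AssertionError("client setting should win");
        }
        // No client setting: the source's SSE-KMS settings are kept.
        if (!"SSE-KMS:source-key".equals(
                chooseEncryption(null, "SSE-KMS:source-key"))) {
            throw new AssertionError("source setting should be retained");
        }
    }
}
```

The point of the sketch is the memorability of the rule: there is exactly one branch, and the client configuration always takes precedence when present.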
[GitHub] [hadoop] bgaborg commented on a change in pull request #1840: HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets
bgaborg commented on a change in pull request #1840: HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets URL: https://github.com/apache/hadoop/pull/1840#discussion_r378226384 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardOutOfBandOperations.java ## @@ -969,16 +970,42 @@ private void deleteFileInListing() deleteFile(rawFS, testFilePath); // File status will be still readable from s3guard - FileStatus status = guardedFs.getFileStatus(testFilePath); + S3AFileStatus status = (S3AFileStatus) + guardedFs.getFileStatus(testFilePath); LOG.info("authoritative: {} status: {}", allowAuthoritative, status); - expectExceptionWhenReading(testFilePath, text); - expectExceptionWhenReadingOpenFileAPI(testFilePath, text, null); - expectExceptionWhenReadingOpenFileAPI(testFilePath, text, status); + if (isVersionedChangeDetection() && status.getVersionId() != null) { Review comment: I would rather add this to the test as a new private method.
[GitHub] [hadoop] bgaborg commented on a change in pull request #1840: HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets
bgaborg commented on a change in pull request #1840: HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets URL: https://github.com/apache/hadoop/pull/1840#discussion_r378223217 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardOutOfBandOperations.java ## @@ -969,16 +970,42 @@ private void deleteFileInListing() deleteFile(rawFS, testFilePath); // File status will be still readable from s3guard - FileStatus status = guardedFs.getFileStatus(testFilePath); + S3AFileStatus status = (S3AFileStatus) + guardedFs.getFileStatus(testFilePath); LOG.info("authoritative: {} status: {}", allowAuthoritative, status); - expectExceptionWhenReading(testFilePath, text); - expectExceptionWhenReadingOpenFileAPI(testFilePath, text, null); - expectExceptionWhenReadingOpenFileAPI(testFilePath, text, status); + if (isVersionedChangeDetection() && status.getVersionId() != null) { +// when the status entry has a version ID, then that may be used +// when opening the file on what is clearly a versioned store. +int length = text.length(); +byte[] bytes = readOpenFileAPI(guardedFs, testFilePath, length, null); +Assertions.assertThat(toChar(bytes)) +.describedAs("openFile(%s)", testFilePath) +.isEqualTo(text); +// reading the rawFS with status will also work. +bytes = readOpenFileAPI(rawFS, testFilePath, length, status); Review comment: `bytes = readOpenFileAPI(rawFS, testFilePath, length, null);` won't fail (tested). Why should it fail? The `FileStatus` is just an additional parameter for the `FutureDataInputStreamBuilder`.
[jira] [Commented] (HADOOP-16856) cmake is missing in the CentOS 8 section of BUILDING.txt
[ https://issues.apache.org/jira/browse/HADOOP-16856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17035331#comment-17035331 ] Hudson commented on HADOOP-16856: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17944 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17944/]) HADOOP-16856. cmake is missing in the CentOS 8 section of BUILDING.txt. (github: rev 749e45dfdb7de2cdaba7c5edb6939dfe62f297be) * (edit) BUILDING.txt > cmake is missing in the CentOS 8 section of BUILDING.txt > > > Key: HADOOP-16856 > URL: https://issues.apache.org/jira/browse/HADOOP-16856 > Project: Hadoop Common > Issue Type: Bug > Components: build, documentation >Reporter: Akira Ajisaka >Assignee: Masatake Iwasaki >Priority: Minor > Fix For: 3.3.0 > > > The following command does not install cmake by default: > {noformat} > $ sudo dnf group install 'Development Tools'{noformat} > cmake is an optional package and {{--with-optional}} should be specified.
[jira] [Assigned] (HADOOP-15964) Add S3A support for Async Scatter/Gather IO
[ https://issues.apache.org/jira/browse/HADOOP-15964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota reassigned HADOOP-15964: --- Assignee: Gabor Bota > Add S3A support for Async Scatter/Gather IO > --- > > Key: HADOOP-15964 > URL: https://issues.apache.org/jira/browse/HADOOP-15964 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Gabor Bota >Priority: Major > > HADOOP-11867 is proposing adding a new scatter/gather IO API. > For an object store to take advantage of it, it should be doing things like > * coalescing reads even with a gap between them > * choosing an optimal ordering of requests > * submitting reads into the executor pool/using any async API provided by the > FS. > * detecting overlapping reads (and then what?) > * switching to HTTP 2 where supported > Do this for S3A
[jira] [Updated] (HADOOP-16856) cmake is missing in the CentOS 8 section of BUILDING.txt
[ https://issues.apache.org/jira/browse/HADOOP-16856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HADOOP-16856: -- Fix Version/s: 3.3.0 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) Committed to trunk. Thanks, [~aajisaka]. > cmake is missing in the CentOS 8 section of BUILDING.txt > > > Key: HADOOP-16856 > URL: https://issues.apache.org/jira/browse/HADOOP-16856 > Project: Hadoop Common > Issue Type: Bug > Components: build, documentation >Reporter: Akira Ajisaka >Assignee: Masatake Iwasaki >Priority: Minor > Fix For: 3.3.0 > > > The following command does not install cmake by default: > {noformat} > $ sudo dnf group install 'Development Tools'{noformat} > cmake is an optional package and {{--with-optional}} should be specified.
[GitHub] [hadoop] iwasakims merged pull request #1841: HADOOP-16856. cmake is missing in the CentOS 8 section of BUILDING.txt.
iwasakims merged pull request #1841: HADOOP-16856. cmake is missing in the CentOS 8 section of BUILDING.txt. URL: https://github.com/apache/hadoop/pull/1841
[GitHub] [hadoop] hadoop-yetus commented on issue #1842: Hadoop 16730: Add Authorizer Interface
hadoop-yetus commented on issue #1842: Hadoop 16730: Add Authorizer Interface URL: https://github.com/apache/hadoop/pull/1842#issuecomment-585164063 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 16s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 21m 20s | trunk passed | | +1 :green_heart: | compile | 0m 26s | trunk passed | | +1 :green_heart: | checkstyle | 0m 20s | trunk passed | | +1 :green_heart: | mvnsite | 0m 32s | trunk passed | | +1 :green_heart: | shadedclient | 16m 31s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 26s | trunk passed | | +0 :ok: | spotbugs | 0m 53s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 52s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 28s | the patch passed | | +1 :green_heart: | compile | 0m 24s | the patch passed | | +1 :green_heart: | javac | 0m 24s | the patch passed | | -0 :warning: | checkstyle | 0m 17s | hadoop-tools/hadoop-azure: The patch generated 74 new + 9 unchanged - 0 fixed = 83 total (was 9) | | +1 :green_heart: | mvnsite | 0m 25s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 11s | patch has no errors when building and testing our client artifacts. 
| | -1 :x: | javadoc | 0m 20s | hadoop-tools_hadoop-azure generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) | | -1 :x: | findbugs | 0m 59s | hadoop-tools/hadoop-azure generated 23 new + 0 unchanged - 0 fixed = 23 total (was 0) | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 19s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | The patch does not generate ASF License warnings. | | | | 63m 9s | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-tools/hadoop-azure | | | Dead store to qualifiedPath in org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getXAttr(Path, String) At AzureBlobFileSystem.java:org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getXAttr(Path, String) At AzureBlobFileSystem.java:[line 690] | | | Dead store to qualifiedPath in org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.setXAttr(Path, String, byte[], EnumSet) At AzureBlobFileSystem.java:org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.setXAttr(Path, String, byte[], EnumSet) At AzureBlobFileSystem.java:[line 655] | | | Redundant nullcheck of authorizer, which is known to be non-null in org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.initializeClient(URI, String, String, boolean) Redundant null check at AzureBlobFileSystemStore.java:is known to be non-null in org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.initializeClient(URI, String, String, boolean) Redundant null check at AzureBlobFileSystemStore.java:[line 1151] | | | org.apache.hadoop.fs.azurebfs.constants.AbfsAuthorizerConstants.APPEND_ACTION isn't final but should be At AbfsAuthorizerConstants.java:be At AbfsAuthorizerConstants.java:[line 46] | | | org.apache.hadoop.fs.azurebfs.constants.AbfsAuthorizerConstants.CREATEFILE_ACTION isn't final but should be At AbfsAuthorizerConstants.java:be At AbfsAuthorizerConstants.java:[line 39] | | | org.apache.hadoop.fs.azurebfs.constants.AbfsAuthorizerConstants.DELETE_ACTION isn't final but should be At 
AbfsAuthorizerConstants.java:be At AbfsAuthorizerConstants.java:[line 38] | | | org.apache.hadoop.fs.azurebfs.constants.AbfsAuthorizerConstants.GETACL_ACTION isn't final but should be At AbfsAuthorizerConstants.java:be At AbfsAuthorizerConstants.java:[line 41] | | | org.apache.hadoop.fs.azurebfs.constants.AbfsAuthorizerConstants.GETFILESTATUS_ACTION isn't final but should be At AbfsAuthorizerConstants.java:be At AbfsAuthorizerConstants.java:[line 42] | | | org.apache.hadoop.fs.azurebfs.constants.AbfsAuthorizerConstants.LISTSTATUS_ACTION isn't final but should be At AbfsAuthorizerConstants.java:be At AbfsAuthorizerConstants.java:[line 37] | | | org.apache.hadoop.fs.azurebfs.constants.AbfsAuthorizerConstants.MKDIR_ACTION isn't final but should be At AbfsAuthorizerConstants.java:be At AbfsAuthorizerConstants.java:[line 40] |
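The repeated "isn't final but should be" items above are FindBugs' MS_SHOULD_BE_FINAL check on mutable `public static` fields; the fix is simply to declare them `final`. A minimal illustration follows; the class name is a hypothetical stand-in, not the real `AbfsAuthorizerConstants`:

```java
// Minimal illustration of the FindBugs MS_SHOULD_BE_FINAL fix flagged above.
// The class name is a hypothetical stand-in, not the real AbfsAuthorizerConstants.
final class AuthorizerConstantsExample {

    // Before (triggers "isn't final but should be"):
    //   public static String DELETE_ACTION = "delete";
    // After: adding 'final' makes the field a true constant, which no
    // caller can reassign, and silences the warning.
    public static final String DELETE_ACTION = "delete";
    public static final String MKDIR_ACTION = "mkdir";

    private AuthorizerConstantsExample() {
        // constants holder, never instantiated
    }
}
```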
[GitHub] [hadoop] steveloughran commented on issue #1840: HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets
steveloughran commented on issue #1840: HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets URL: https://github.com/apache/hadoop/pull/1840#issuecomment-585147169 > `bytes = readOpenFileAPI(rawFS, testFilePath, length, null);` should still fail right? Do you think we can also do that in this `if` clause? Yeah, you are right. I'll include that. It should be gone from the rawFS.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()
hadoop-yetus removed a comment on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init() URL: https://github.com/apache/hadoop/pull/1838#issuecomment-584831964 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 12s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 22m 14s | trunk passed | | +1 :green_heart: | compile | 0m 33s | trunk passed | | +1 :green_heart: | checkstyle | 0m 26s | trunk passed | | +1 :green_heart: | mvnsite | 0m 35s | trunk passed | | +1 :green_heart: | shadedclient | 16m 30s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 26s | trunk passed | | +0 :ok: | spotbugs | 1m 0s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 56s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | the patch passed | | +1 :green_heart: | compile | 0m 26s | the patch passed | | +1 :green_heart: | javac | 0m 26s | the patch passed | | -0 :warning: | checkstyle | 0m 17s | hadoop-tools/hadoop-aws: The patch generated 1 new + 15 unchanged - 0 fixed = 16 total (was 15) | | +1 :green_heart: | mvnsite | 0m 29s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 25s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 24s | the patch passed | | +1 :green_heart: | findbugs | 1m 2s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 17s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | The patch does not generate ASF License warnings. | | | | 64m 38s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1838 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint | | uname | Linux 2732dfab6c4e 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / e637797 | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/5/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/5/testReport/ | | Max. process+thread count | 425 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/5/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] mukund-thakur removed a comment on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()
mukund-thakur removed a comment on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init() URL: https://github.com/apache/hadoop/pull/1838#issuecomment-584804743 > if closing the `fs` value triggers failures in superclass cleanup, then you are sharing an FS instance between test cases. (i.e you are actually picking up the last one created). That is fixed now. That was a mistake from my side. Closing "fs" is not causing any problem in superclass cleanup now. One other thing to notice here is there is only one test case where the 'fs' is actually created. All others are just failure scenarios. > If you disable caching you should get a new one, which you can then close safely Already disabled file system caching.
[GitHub] [hadoop] snvijaya opened a new pull request #1842: Hadoop 16730: Add Authorizer Interface
snvijaya opened a new pull request #1842: Hadoop 16730: Add Authorizer Interface URL: https://github.com/apache/hadoop/pull/1842 Enabling the Authorizer Interface as a plugin.
[GitHub] [hadoop] hadoop-yetus commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()
hadoop-yetus commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init() URL: https://github.com/apache/hadoop/pull/1838#issuecomment-585103560 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 28s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 22m 45s | trunk passed | | +1 :green_heart: | compile | 0m 35s | trunk passed | | +1 :green_heart: | checkstyle | 0m 23s | trunk passed | | +1 :green_heart: | mvnsite | 0m 38s | trunk passed | | +1 :green_heart: | shadedclient | 16m 26s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 26s | trunk passed | | +0 :ok: | spotbugs | 0m 59s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 57s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 34s | the patch passed | | +1 :green_heart: | compile | 0m 25s | the patch passed | | +1 :green_heart: | javac | 0m 25s | the patch passed | | +1 :green_heart: | checkstyle | 0m 18s | the patch passed | | +1 :green_heart: | mvnsite | 0m 30s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 16s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 23s | the patch passed | | +1 :green_heart: | findbugs | 1m 2s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 23s | hadoop-aws in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. | | | | 65m 21s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1838 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint | | uname | Linux 75831ca7b3e5 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 9709afe | | Default Java | 1.8.0_242 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/6/testReport/ | | Max. process+thread count | 341 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/6/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob
hadoop-yetus commented on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob
URL: https://github.com/apache/hadoop/pull/1790#issuecomment-585102571

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|---------:|:--------|:--------|
| +0 :ok: | reexec | 1m 33s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +0 :ok: | markdownlint | 0m 1s | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 22m 40s | trunk passed |
| +1 :green_heart: | compile | 0m 40s | trunk passed |
| +1 :green_heart: | checkstyle | 0m 24s | trunk passed |
| +1 :green_heart: | mvnsite | 0m 35s | trunk passed |
| +1 :green_heart: | shadedclient | 16m 42s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 23s | trunk passed |
| +0 :ok: | spotbugs | 0m 51s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 0m 49s | trunk passed |
||| _ Patch Compile Tests _ |
| -1 :x: | mvninstall | 0m 13s | hadoop-azure in the patch failed. |
| -1 :x: | compile | 0m 13s | hadoop-azure in the patch failed. |
| -1 :x: | javac | 0m 13s | hadoop-azure in the patch failed. |
| -0 :warning: | checkstyle | 0m 12s | The patch fails to run checkstyle in hadoop-azure |
| -1 :x: | mvnsite | 0m 14s | hadoop-azure in the patch failed. |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 15m 27s | patch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 16s | hadoop-azure in the patch failed. |
| -1 :x: | findbugs | 0m 14s | hadoop-azure in the patch failed. |
||| _ Other Tests _ |
| -1 :x: | unit | 0m 15s | hadoop-azure in the patch failed. |
| +1 :green_heart: | asflicense | 0m 29s | The patch does not generate ASF License warnings. |
|  |  | 62m 38s |  |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/6/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1790 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
| uname | Linux b5e26bda998d 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 9709afe |
| Default Java | 1.8.0_242 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/6/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/6/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/6/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/6/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1790/out/maven-patch-checkstyle-hadoop-tools_hadoop-azure.txt |
| mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/6/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/6/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/6/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/6/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/6/testReport/ |
| Max. process+thread count | 334 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/6/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1841: HADOOP-16856. cmake is missing in the CentOS 8 section of BUILDING.txt.
hadoop-yetus commented on issue #1841: HADOOP-16856. cmake is missing in the CentOS 8 section of BUILDING.txt.
URL: https://github.com/apache/hadoop/pull/1841#issuecomment-585095438

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|---------:|:--------|:--------|
| +0 :ok: | reexec | 1m 13s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | shadedclient | 15m 32s | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 15m 16s | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 30s | The patch does not generate ASF License warnings. |
|  |  | 34m 10s |  |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1841/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1841 |
| Optional Tests | dupname asflicense |
| uname | Linux de1a3a77d24b 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 9709afe |
| Max. process+thread count | 305 (vs. ulimit of 5500) |
| modules | C: . U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1841/1/console |
| versions | git=2.7.4 maven=3.3.9 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] mukund-thakur commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()
mukund-thakur commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#issuecomment-585086587

All review comments addressed.
[jira] [Updated] (HADOOP-16856) cmake is missing in the CentOS 8 section of BUILDING.txt
[ https://issues.apache.org/jira/browse/HADOOP-16856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Masatake Iwasaki updated HADOOP-16856:
--------------------------------------
    Status: Patch Available  (was: Open)

> cmake is missing in the CentOS 8 section of BUILDING.txt
> --------------------------------------------------------
>
>                 Key: HADOOP-16856
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16856
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: build, documentation
>            Reporter: Akira Ajisaka
>            Assignee: Masatake Iwasaki
>            Priority: Minor
>
> The following command does not install cmake by default:
> {noformat}
> $ sudo dnf group install 'Development Tools'
> {noformat}
> cmake is an optional package and {{--with-optional}} should be specified.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
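The issue description above can be reproduced and fixed on CentOS 8 as follows. This is an illustrative sketch: the `--with-optional` flag comes from the issue text, while the final verification step is an assumption added here for completeness.

```shell
# By default, 'dnf group install' pulls in only the mandatory and default
# members of a package group, so the optional cmake package is skipped:
#   sudo dnf group install 'Development Tools'

# Including optional group members installs cmake as well:
sudo dnf group install --with-optional 'Development Tools'

# Assumed verification step: confirm cmake is now available on the PATH.
cmake --version
```

Installing cmake directly (`sudo dnf install cmake`) would likely also work, but the `--with-optional` form matches the group-based instructions already in BUILDING.txt.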
[GitHub] [hadoop] iwasakims opened a new pull request #1841: HADOOP-16856. cmake is missing in the CentOS 8 section of BUILDING.txt.
iwasakims opened a new pull request #1841: HADOOP-16856. cmake is missing in the CentOS 8 section of BUILDING.txt.
URL: https://github.com/apache/hadoop/pull/1841
[jira] [Commented] (HADOOP-16856) cmake is missing in the CentOS 8 section of BUILDING.txt
[ https://issues.apache.org/jira/browse/HADOOP-16856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17035133#comment-17035133 ]

Masatake Iwasaki commented on HADOOP-16856:
-------------------------------------------

[~aajisaka] you are right. I will submit a PR.

> cmake is missing in the CentOS 8 section of BUILDING.txt
> --------------------------------------------------------
>
>                 Key: HADOOP-16856
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16856
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: build, documentation
>            Reporter: Akira Ajisaka
>            Assignee: Masatake Iwasaki
>            Priority: Minor
>
> The following command does not install cmake by default:
> {noformat}
> $ sudo dnf group install 'Development Tools'
> {noformat}
> cmake is an optional package and {{--with-optional}} should be specified.
[jira] [Assigned] (HADOOP-16856) cmake is missing in the CentOS 8 section of BUILDING.txt
[ https://issues.apache.org/jira/browse/HADOOP-16856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Masatake Iwasaki reassigned HADOOP-16856:
-----------------------------------------
    Assignee: Masatake Iwasaki

> cmake is missing in the CentOS 8 section of BUILDING.txt
> --------------------------------------------------------
>
>                 Key: HADOOP-16856
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16856
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: build, documentation
>            Reporter: Akira Ajisaka
>            Assignee: Masatake Iwasaki
>            Priority: Minor
>
> The following command does not install cmake by default:
> {noformat}
> $ sudo dnf group install 'Development Tools'
> {noformat}
> cmake is an optional package and {{--with-optional}} should be specified.