[GitHub] [hadoop] hadoop-yetus commented on issue #1478: HDFS-14856. Fetch file ACLs while mounting external store.
hadoop-yetus commented on issue #1478: HDFS-14856. Fetch file ACLs while mounting external store. URL: https://github.com/apache/hadoop/pull/1478#issuecomment-541273779

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 77 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 21 | Maven dependency ordering for branch |
| +1 | mvninstall | 1270 | trunk passed |
| +1 | compile | 1180 | trunk passed |
| +1 | checkstyle | 177 | trunk passed |
| +1 | mvnsite | 110 | trunk passed |
| +1 | shadedclient | 1159 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 111 | trunk passed |
| 0 | spotbugs | 44 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 222 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 22 | Maven dependency ordering for patch |
| +1 | mvninstall | 88 | the patch passed |
| +1 | compile | 1027 | the patch passed |
| +1 | javac | 1027 | the patch passed |
| +1 | checkstyle | 217 | root: The patch generated 0 new + 453 unchanged - 1 fixed = 453 total (was 454) |
| +1 | mvnsite | 116 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 1 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 808 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 114 | the patch passed |
| +1 | findbugs | 239 | the patch passed |
||| _ Other Tests _ |
| -1 | unit | 6094 | hadoop-hdfs in the patch failed. |
| +1 | unit | 47 | hadoop-fs2img in the patch passed. |
| +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
| | | | 13185 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
| | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1478/11/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1478 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux c42458573b42 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / c561a70 |
| Default Java | 1.8.0_222 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1478/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1478/11/testReport/ |
| Max. process+thread count | 2838 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs hadoop-tools/hadoop-fs2img U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1478/11/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] liusheng commented on issue #1546: HADOOP-16614. [COMMON] Add aarch64 support of the dependent leveldbjni
liusheng commented on issue #1546: HADOOP-16614. [COMMON] Add aarch64 support of the dependent leveldbjni URL: https://github.com/apache/hadoop/pull/1546#issuecomment-541273158 Hi @vinayakumarb @Apache9, could you please take a look at this? Thank you :)
[GitHub] [hadoop] liusheng commented on issue #1546: HADOOP-16614. [COMMON] Add aarch64 support of the dependent leveldbjni
liusheng commented on issue #1546: HADOOP-16614. [COMMON] Add aarch64 support of the dependent leveldbjni URL: https://github.com/apache/hadoop/pull/1546#issuecomment-541273012 /retest
[jira] [Commented] (HADOOP-15961) S3A committers: make sure there's regular progress() calls
[ https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949859#comment-16949859 ] lqjacklee commented on HADOOP-15961: [~ste...@apache.org] Thanks for the reply, I will create another patch for this task. > S3A committers: make sure there's regular progress() calls > -- > > Key: HADOOP-15961 > URL: https://issues.apache.org/jira/browse/HADOOP-15961 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Steve Loughran >Assignee: lqjacklee >Priority: Minor > Attachments: HADOOP-15961-001.patch, HADOOP-15961-002.patch, > HADOOP-15961-003.patch > > > MAPREDUCE-7164 highlights how inside job/task commit more context.progress() > callbacks are needed, just for HDFS. > The S3A committers should be reviewed similarly. > At a glance: > StagingCommitter.commitTaskInternal() is at risk if a task writes enough data > to the localfs that the upload takes longer than the timeout. > It should call progress() as every single file commits, or better: modify > {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks > after every part upload. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
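The change suggested in the issue above, passing a progress callback down into the part-upload loop so long uploads keep signalling liveness, can be sketched as follows. All names here are illustrative stand-ins (a local `Progress` interface in place of Hadoop's `Progressable`, a hypothetical `PendingCommitUploader`), not the actual S3A committer API.

```java
import java.util.List;

// Hypothetical stand-in for org.apache.hadoop.util.Progressable.
interface Progress {
    void progress();
}

// Illustrative sketch of the suggestion: the upload loop accepts a
// progress callback and invokes it after every part upload, so a task
// uploading many parts never goes silent long enough to be timed out.
class PendingCommitUploader {
    int uploadParts(List<byte[]> parts, Progress progress) {
        int uploaded = 0;
        for (byte[] part : parts) {
            // ... upload one part to the object store here ...
            uploaded++;
            progress.progress(); // heartbeat after each part, not just at the end
        }
        return uploaded;
    }
}
```

With this shape, the caller (here, the task-commit code path) supplies the framework's progress reporter, and liveness reporting scales with the number of parts rather than the number of files.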
[jira] [Commented] (HADOOP-16643) Update netty4 to the latest 4.1.42
[ https://issues.apache.org/jira/browse/HADOOP-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949833#comment-16949833 ] Wei-Chiu Chuang commented on HADOOP-16643: -- Everything looks good so far. It doesn't seem to break downstream projects. Let me know if there are any objections; otherwise I'll commit this to trunk next week. I'm a little hesitant to cherry-pick the commit to lower branches, but I probably will. > Update netty4 to the latest 4.1.42 > -- > > Key: HADOOP-16643 > URL: https://issues.apache.org/jira/browse/HADOOP-16643 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Assignee: Lisheng Sun >Priority: Major > Attachments: HADOOP-16643.001.patch > > > The latest netty is out. Let's update it.
[jira] [Commented] (HADOOP-13836) Securing Hadoop RPC using SSL
[ https://issues.apache.org/jira/browse/HADOOP-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949827#comment-16949827 ] Wei-Chiu Chuang commented on HADOOP-13836: -- I think this work is superseded by HADOOP-15977 where Daryn has made good progress. > Securing Hadoop RPC using SSL > - > > Key: HADOOP-13836 > URL: https://issues.apache.org/jira/browse/HADOOP-13836 > Project: Hadoop Common > Issue Type: New Feature > Components: ipc >Reporter: kartheek muthyala >Assignee: kartheek muthyala >Priority: Major > Attachments: HADOOP-13836-v2.patch, HADOOP-13836-v3.patch, > HADOOP-13836-v4.patch, HADOOP-13836.patch, Secure IPC OSS Proposal-1.pdf, > SecureIPC Performance Analysis-OSS.pdf > > > Today, RPC connections in Hadoop are encrypted using Simple Authentication & > Security Layer (SASL), with the Kerberos ticket based authentication or > Digest-md5 checksum based authentication protocols. This proposal is about > enhancing this cipher suite with SSL/TLS based encryption and authentication. > SSL/TLS is a proposed Internet Engineering Task Force (IETF) standard, that > provides data security and integrity across two different end points in a > network. This protocol has made its way to a number of applications such as > web browsing, email, internet faxing, messaging, VOIP etc. And supporting > this cipher suite at the core of Hadoop would give a good synergy with the > applications on top and also bolster industry adoption of Hadoop. > The Server and Client code in Hadoop IPC should support the following modes > of communication > 1. Plain > 2. SASL encryption with an underlying authentication > 3. SSL based encryption and authentication (x509 certificate)
[jira] [Comment Edited] (HADOOP-15169) "hadoop.ssl.enabled.protocols" should be considered in httpserver2
[ https://issues.apache.org/jira/browse/HADOOP-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949824#comment-16949824 ] Wei-Chiu Chuang edited comment on HADOOP-15169 at 10/11/19 10:49 PM: - Thanks [~xyao]! Good idea. I'll update the patch accordingly. However, Jetty won't let me enable SSLv2Hello only. I must also enable another protocol otherwise it fails right away. I think that's by design. So this doesn't really need a test case. was (Author: jojochuang): Thanks [~xyao]! Good idea. I'll update the patch accordingly. If Jetty won't let me enable SSLv2Hello only. I must also enable another protocol otherwise it fails right away. I think that's by design. So this doesn't really need a test case. > "hadoop.ssl.enabled.protocols" should be considered in httpserver2 > -- > > Key: HADOOP-15169 > URL: https://issues.apache.org/jira/browse/HADOOP-15169 > Project: Hadoop Common > Issue Type: Bug > Components: security >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula >Priority: Major > Attachments: HADOOP-15169-branch-2.patch, HADOOP-15169.002.patch, > HADOOP-15169.patch > > > As of now *hadoop.ssl.enabled.protocols"* will not take effect for all the > http servers( only Datanodehttp server will use this config).
[jira] [Commented] (HADOOP-15169) "hadoop.ssl.enabled.protocols" should be considered in httpserver2
[ https://issues.apache.org/jira/browse/HADOOP-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949824#comment-16949824 ] Wei-Chiu Chuang commented on HADOOP-15169: -- Thanks [~xyao]! Good idea. I'll update the patch accordingly. If Jetty won't let me enable SSLv2Hello only. I must also enable another protocol otherwise it fails right away. I think that's by design. So this doesn't really need a test case. > "hadoop.ssl.enabled.protocols" should be considered in httpserver2 > -- > > Key: HADOOP-15169 > URL: https://issues.apache.org/jira/browse/HADOOP-15169 > Project: Hadoop Common > Issue Type: Bug > Components: security >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula >Priority: Major > Attachments: HADOOP-15169-branch-2.patch, HADOOP-15169.002.patch, > HADOOP-15169.patch > > > As of now *hadoop.ssl.enabled.protocols"* will not take effect for all the > http servers( only Datanodehttp server will use this config).
[GitHub] [hadoop] arp7 commented on issue #1637: HDDS-2206. Separate handling for OMException and IOException in the Ozone Manager. Contributed by Supratim Deka
arp7 commented on issue #1637: HDDS-2206. Separate handling for OMException and IOException in the Ozone Manager. Contributed by Supratim Deka URL: https://github.com/apache/hadoop/pull/1637#issuecomment-541230130 One thing I missed - where is the serialization of the exception message done over the wire?
[GitHub] [hadoop] arp7 commented on a change in pull request #1637: HDDS-2206. Separate handling for OMException and IOException in the Ozone Manager. Contributed by Supratim Deka
arp7 commented on a change in pull request #1637: HDDS-2206. Separate handling for OMException and IOException in the Ozone Manager. Contributed by Supratim Deka URL: https://github.com/apache/hadoop/pull/1637#discussion_r334176751

## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
## @@ -219,7 +244,7 @@ private OMResponse submitRequestDirectlyToOM(OMRequest request) {
        omClientResponse = omClientRequest.validateAndUpdateCache(
            ozoneManager, index, ozoneManagerDoubleBuffer::add);
      }
-    } catch(IOException ex) {
+    } catch(OMException ex) {

Review comment: I think we don't need this catch block at all, as the caller will do the right thing via the call to `createErrorResponse`.
[GitHub] [hadoop] arp7 commented on a change in pull request #1637: HDDS-2206. Separate handling for OMException and IOException in the Ozone Manager. Contributed by Supratim Deka
arp7 commented on a change in pull request #1637: HDDS-2206. Separate handling for OMException and IOException in the Ozone Manager. Contributed by Supratim Deka URL: https://github.com/apache/hadoop/pull/1637#discussion_r334175139

## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
## @@ -200,15 +200,7 @@
 import static org.apache.hadoop.ozone.OzoneConsts.OM_METRICS_FILE;
 import static org.apache.hadoop.ozone.OzoneConsts.OM_METRICS_TEMP_FILE;
 import static org.apache.hadoop.ozone.OzoneConsts.RPC_PORT;
-import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
-import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_HANDLER_COUNT_DEFAULT;
-import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_HANDLER_COUNT_KEY;
-import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_KERBEROS_KEYTAB_FILE_KEY;
-import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_KERBEROS_PRINCIPAL_KEY;
-import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_METRICS_SAVE_INTERVAL;
-import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_METRICS_SAVE_INTERVAL_DEFAULT;
-import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_USER_MAX_VOLUME;
-import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_USER_MAX_VOLUME_DEFAULT;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.*;

Review comment: Probably better to avoid static wild-card imports. Some IDEs do this by default.
[GitHub] [hadoop] arp7 commented on a change in pull request #1637: HDDS-2206. Separate handling for OMException and IOException in the Ozone Manager. Contributed by Supratim Deka
arp7 commented on a change in pull request #1637: HDDS-2206. Separate handling for OMException and IOException in the Ozone Manager. Contributed by Supratim Deka URL: https://github.com/apache/hadoop/pull/1637#discussion_r334173951 ## File path: hadoop-hdds/common/src/main/resources/ozone-default.xml ## @@ -1641,6 +1641,20 @@ + +ozone.om.exception.stacktrace.propagate Review comment: Instead of making it an OM-specific setting, we could make it a global setting for all services.
[jira] [Commented] (HADOOP-16612) Track Azure Blob File System client-perceived latency
[ https://issues.apache.org/jira/browse/HADOOP-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949782#comment-16949782 ] Jeetesh Mangwani commented on HADOOP-16612: --- I have fixed the checkstyle issues. Please take a look.

When I run tests on my machine, I see 4 failing tests on my 'trunk' branch. These tests fail on my feature branch too.
1. ITestGetNameSpaceEnabled.testNonXNSAccount: fails because the HTTP response status is not 400, but is 404
2. ITestAzureBlobFileSystemCLI.testMkdirRootNonExistentContainer: fails in the setup phase because the HTTP response status is not 400, but is 404
3. ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek: times out, probably because this is a scale test and my machine is slow
4. ITestAzureBlobFileSystemE2EScale.testWriteHeavyBytesToFileAcrossThreads: times out, probably because there are a lot of heavy writes and my machine is slow

Here are the details:

=== trunk, non-xns
[INFO] Tests run: 42, Failures: 0, Errors: 0, Skipped: 0
[ERROR] Failures:
[ERROR] ITestGetNameSpaceEnabled.testNonXNSAccount:59->Assert.assertFalse:64->Assert.assertTrue:41->Assert.fail:88 Expecting getIsNamespaceEnabled() return false
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemCLI>AbstractAbfsIntegrationTest.setup:137 » AbfsRestOperation
[INFO]
[ERROR] Tests run: 395, Failures: 1, Errors: 1, Skipped: 21
[ERROR] Errors:
[ERROR] ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:60->testReadWriteAndSeek:75 » TestTimedOut
[ERROR] ITestAzureBlobFileSystemE2EScale.testWriteHeavyBytesToFileAcrossThreads:77 » TestTimedOut
[INFO]
[ERROR] Tests run: 192, Failures: 0, Errors: 2, Skipped: 24

=== feature branch, non-xns
[INFO] Tests run: 53, Failures: 0, Errors: 0, Skipped: 0
[ERROR] Failures:
[ERROR] ITestGetNameSpaceEnabled.testNonXNSAccount:59->Assert.assertFalse:64->Assert.assertTrue:41->Assert.fail:88 Expecting getIsNamespaceEnabled() return false
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemCLI>AbstractAbfsIntegrationTest.setup:137 » AbfsRestOperation
[INFO]
[ERROR] Tests run: 395, Failures: 1, Errors: 1, Skipped: 21
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR] ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:60->testReadWriteAndSeek:75 » TestTimedOut
[ERROR] ITestAzureBlobFileSystemE2EScale.testWriteHeavyBytesToFileAcrossThreads:77 » TestTimedOut
[INFO]
[ERROR] Tests run: 192, Failures: 0, Errors: 2, Skipped: 24

=== trunk, xns
[INFO] Tests run: 42, Failures: 0, Errors: 0, Skipped: 0
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemCLI>AbstractAbfsIntegrationTest.setup:137 » AbfsRestOperation
[INFO]
[ERROR] Tests run: 395, Failures: 0, Errors: 1, Skipped: 21
[ERROR] Errors:
[ERROR] ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:60->testReadWriteAndSeek:75 » TestTimedOut
[ERROR] ITestAzureBlobFileSystemE2EScale.testWriteHeavyBytesToFileAcrossThreads:77 » TestTimedOut
[INFO]
[ERROR] Tests run: 192, Failures: 0, Errors: 2, Skipped: 24

=== feature branch, xns
[INFO] Tests run: 53, Failures: 0, Errors: 0, Skipped: 0
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemCLI>AbstractAbfsIntegrationTest.setup:137 » AbfsRestOperation
[INFO]
[ERROR] Tests run: 395, Failures: 0, Errors: 1, Skipped: 21
[ERROR] Errors:
[ERROR] ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:60->testReadWriteAndSeek:75 » TestTimedOut
[ERROR] ITestAzureBlobFileSystemE2EScale.testWriteHeavyBytesToFileAcrossThreads:77 » TestTimedOut
[INFO]
[ERROR] Tests run: 192, Failures: 0, Errors: 2, Skipped: 24

Error stack traces:
[ERROR] testMkdirRootNonExistentContainer(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemCLI) Time elapsed: 18.27 s <<< ERROR!
Operation failed: "The specified filesystem does not exist.", 404, HEAD, https://abfstest02.dfs.core.windows.net/abfs-testcontainer-aa478873-647e-455a-9a71-4cb6da30a088//?upn=false=getAccessControl=90
at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:143)
at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:523)
at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:506)
at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getIsNamespaceEnabled(AzureBlobFileSystemStore.java:224)
at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getIsNamespaceEnabled(AzureBlobFileSystem.java:1108)
at org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.setup(AbstractAbfsIntegrationTest.java:137)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[GitHub] [hadoop] arp7 commented on a change in pull request #1622: HDDS-1228. Chunk Scanner Checkpoints
arp7 commented on a change in pull request #1622: HDDS-1228. Chunk Scanner Checkpoints URL: https://github.com/apache/hadoop/pull/1622#discussion_r334164946

## File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerDataScanner.java
## @@ -95,14 +97,19 @@ public void runIteration() {
     while (!stopping && itr.hasNext()) {
       Container c = itr.next();
       if (c.shouldScanData()) {
+        ContainerData containerData = c.getContainerData();
+        long containerId = containerData.getContainerID();
         try {
+          logScanStart(containerData);
           if (!c.scanData(throttler, canceler)) {
             metrics.incNumUnHealthyContainers();
-            controller.markContainerUnhealthy(
-                c.getContainerData().getContainerID());
+            controller.markContainerUnhealthy(containerId);

Review comment: We should also call `logScanCompleted` and `updateDataScanTimestamp` in the failure path.
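The review comment above, that scan bookkeeping should run on the failure path too, is typically addressed with a try/finally. A minimal illustrative sketch follows; the class and its helper methods mirror the names discussed in the PR but are hypothetical stand-ins, not the actual Ozone scanner API.

```java
// Illustrative sketch: moving the bookkeeping into a finally block
// guarantees that logScanCompleted and updateDataScanTimestamp run even
// when the scan marks the container unhealthy.
class ScannerSketch {
    boolean completedLogged;
    boolean timestampUpdated;

    void scanContainer(boolean healthy) {
        try {
            if (!healthy) {
                markUnhealthy(); // failure path
            }
        } finally {
            logScanCompleted();        // runs on success and failure alike
            updateDataScanTimestamp(); // checkpoint advances on failure too
        }
    }

    void markUnhealthy() { /* record the unhealthy container */ }
    void logScanCompleted() { completedLogged = true; }
    void updateDataScanTimestamp() { timestampUpdated = true; }
}
```

The design point is that a checkpoint scanner which only records completion on success will rescan failed containers forever; recording the attempt in a finally block keeps the checkpoint moving.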
[GitHub] [hadoop] arp7 commented on a change in pull request #1622: HDDS-1228. Chunk Scanner Checkpoints
arp7 commented on a change in pull request #1622: HDDS-1228. Chunk Scanner Checkpoints URL: https://github.com/apache/hadoop/pull/1622#discussion_r334163690

## File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
## @@ -89,7 +91,9 @@
   private HddsVolume volume;
   private String checksum;
-  public static final Charset CHARSET_ENCODING = Charset.forName("UTF-8");
+  private Long dataScanTimestamp;

Review comment: Also can you add a comment stating what the number means? Is it Unix epoch?
[jira] [Comment Edited] (HADOOP-16612) Track Azure Blob File System client-perceived latency
[ https://issues.apache.org/jira/browse/HADOOP-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942356#comment-16942356 ] Jeetesh Mangwani edited comment on HADOOP-16612 at 10/11/19 8:30 PM: - Here's the PR: https://github.com/apache/hadoop/pull/1611 was (Author: jeeteshm): Here's the PR: https://github.com/apache/hadoop/pull/1569 > Track Azure Blob File System client-perceived latency > - > > Key: HADOOP-16612 > URL: https://issues.apache.org/jira/browse/HADOOP-16612 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure, hdfs-client >Reporter: Jeetesh Mangwani >Assignee: Jeetesh Mangwani >Priority: Major > > Track the end-to-end performance of ADLS Gen 2 REST APIs by measuring > latencies in the Hadoop ABFS driver.
[GitHub] [hadoop] arp7 commented on a change in pull request #1622: HDDS-1228. Chunk Scanner Checkpoints
arp7 commented on a change in pull request #1622: HDDS-1228. Chunk Scanner Checkpoints URL: https://github.com/apache/hadoop/pull/1622#discussion_r334163381

## File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
## @@ -89,7 +91,9 @@
   private HddsVolume volume;
   private String checksum;
-  public static final Charset CHARSET_ENCODING = Charset.forName("UTF-8");
+  private Long dataScanTimestamp;

Review comment: Can you make this a Java `Optional`? Then instead of `null` we can check for `Optional.absent`.
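The suggestion above, modelling the possibly-missing scan timestamp as an `Optional` rather than a nullable `Long`, can be sketched with the JDK's `java.util.Optional` (whose `Optional.empty()` is the analogue of Guava's `Optional.absent()` mentioned in the review). The class below is an illustrative stand-in, not the actual `ContainerData` implementation, and also folds in the sibling review ask for a comment stating the unit of the number.

```java
import java.util.Optional;

// Illustrative sketch: a container that has never been scanned reports
// Optional.empty() instead of a null Long, so callers must handle absence.
class ContainerDataSketch {
    // Milliseconds since the Unix epoch of the last completed data scan;
    // empty if the container has never been scanned.
    private Optional<Long> dataScanTimestamp = Optional.empty();

    Optional<Long> getDataScanTimestamp() {
        return dataScanTimestamp;
    }

    void updateDataScanTimestamp(long epochMillis) {
        dataScanTimestamp = Optional.of(epochMillis);
    }
}
```

The type makes the "never scanned" state explicit at every call site, where a bare `Long` invites accidental auto-unboxing NPEs.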
[GitHub] [hadoop] hadoop-yetus commented on issue #1478: HDFS-14856. Fetch file ACLs while mounting external store.
hadoop-yetus commented on issue #1478: HDFS-14856. Fetch file ACLs while mounting external store. URL: https://github.com/apache/hadoop/pull/1478#issuecomment-541207425

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 91 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 28 | Maven dependency ordering for branch |
| +1 | mvninstall | 1382 | trunk passed |
| +1 | compile | 1100 | trunk passed |
| +1 | checkstyle | 177 | trunk passed |
| +1 | mvnsite | 112 | trunk passed |
| +1 | shadedclient | 1136 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 113 | trunk passed |
| 0 | spotbugs | 44 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 226 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 22 | Maven dependency ordering for patch |
| +1 | mvninstall | 87 | the patch passed |
| +1 | compile | 1023 | the patch passed |
| +1 | javac | 1023 | the patch passed |
| -0 | checkstyle | 173 | root: The patch generated 1 new + 453 unchanged - 1 fixed = 454 total (was 454) |
| +1 | mvnsite | 112 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 1 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 795 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 112 | the patch passed |
| +1 | findbugs | 238 | the patch passed |
||| _ Other Tests _ |
| -1 | unit | 5995 | hadoop-hdfs in the patch failed. |
| +1 | unit | 43 | hadoop-fs2img in the patch passed. |
| +1 | asflicense | 57 | The patch does not generate ASF License warnings. |
| | | | 13063 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
| | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1478/10/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1478 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux 78fab810aa8e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / ec86f42 |
| Default Java | 1.8.0_222 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1478/10/artifact/out/diff-checkstyle-root.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1478/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1478/10/testReport/ |
| Max. process+thread count | 2680 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs hadoop-tools/hadoop-fs2img U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1478/10/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1611: Hadoop 16612 Track Azure Blob File System client-perceived latency
hadoop-yetus commented on issue #1611: Hadoop 16612 Track Azure Blob File System client-perceived latency URL: https://github.com/apache/hadoop/pull/1611#issuecomment-541205791

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 40 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 1077 | trunk passed |
| +1 | compile | 33 | trunk passed |
| +1 | checkstyle | 25 | trunk passed |
| +1 | mvnsite | 35 | trunk passed |
| +1 | shadedclient | 781 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 28 | trunk passed |
| 0 | spotbugs | 52 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 50 | trunk passed |
| -0 | patch | 77 | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 30 | the patch passed |
| +1 | compile | 26 | the patch passed |
| +1 | javac | 26 | the patch passed |
| -0 | checkstyle | 18 | hadoop-tools/hadoop-azure: The patch generated 8 new + 5 unchanged - 0 fixed = 13 total (was 5) |
| +1 | mvnsite | 28 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 2 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 777 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 24 | the patch passed |
| +1 | findbugs | 57 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 85 | hadoop-azure in the patch passed. |
| +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
| | | | 3240 | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1611/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1611 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux aac8856df1c2 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / c561a70 |
| Default Java | 1.8.0_222 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1611/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1611/2/testReport/ |
| Max. process+thread count | 438 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1611/2/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-15169) "hadoop.ssl.enabled.protocols" should be considered in httpserver2
[ https://issues.apache.org/jira/browse/HADOOP-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949694#comment-16949694 ] Xiaoyu Yao commented on HADOOP-15169: Thanks [~brahmareddy] and [~weichiu] for the patch. It looks good to me overall. I have one suggestion w.r.t. the handling of the excluded protocols. By default, SslContextFactory sets the excluded protocols to ("SSL", "SSLv2", "SSLv2Hello", "SSLv3"). Instead of always resetting the excluded protocols to empty, we should remove only those contained in enabledProtocols from the excluded set; this way we don't allow weak protocols that are not in the enabled list. Please also add a test case to ensure that if a user adds SSLv2Hello to the enabled protocols, SSL/SSLv2/SSLv3 are still not allowed.

> "hadoop.ssl.enabled.protocols" should be considered in httpserver2
> ------------------------------------------------------------------
>
> Key: HADOOP-15169
> URL: https://issues.apache.org/jira/browse/HADOOP-15169
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Reporter: Brahma Reddy Battula
> Assignee: Brahma Reddy Battula
> Priority: Major
> Attachments: HADOOP-15169-branch-2.patch, HADOOP-15169.002.patch, HADOOP-15169.patch
>
> As of now *hadoop.ssl.enabled.protocols* will not take effect for all the http servers (only the DataNode http server uses this config).

-- This message was sent by Atlassian Jira (v8.3.4#803005)
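Xiaoyu's suggestion above — keep Jetty's default exclusions except for the protocols the admin explicitly enabled — comes down to a simple set difference. A minimal standalone sketch (the class and method names here are hypothetical, not Hadoop or Jetty APIs):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class SslProtocolFilter {
    // Jetty's SslContextFactory excludes these protocols by default.
    static final Set<String> DEFAULT_EXCLUDED =
        new LinkedHashSet<>(Arrays.asList("SSL", "SSLv2", "SSLv2Hello", "SSLv3"));

    // Keep every default exclusion that was not explicitly enabled,
    // rather than clearing the whole exclusion list.
    static Set<String> effectiveExclusions(Set<String> enabledProtocols) {
        Set<String> excluded = new LinkedHashSet<>(DEFAULT_EXCLUDED);
        excluded.removeAll(enabledProtocols);
        return excluded;
    }

    public static void main(String[] args) {
        Set<String> enabled =
            new LinkedHashSet<>(Arrays.asList("TLSv1.2", "SSLv2Hello"));
        // SSLv2Hello was explicitly enabled, so it drops out of the
        // exclusions; SSL, SSLv2 and SSLv3 stay blocked.
        System.out.println(effectiveExclusions(enabled));
    }
}
```

This is exactly the test case requested: enabling SSLv2Hello must not silently re-enable SSL/SSLv2/SSLv3.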
[GitHub] [hadoop] arp7 merged pull request #1556: HDDS-2213. Reduce key provider loading log level in OzoneFileSystem#ge…
arp7 merged pull request #1556: HDDS-2213. Reduce key provider loading log level in OzoneFileSystem#ge… URL: https://github.com/apache/hadoop/pull/1556
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…
xiaoyuyao commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests… URL: https://github.com/apache/hadoop/pull/1528#discussion_r334089125

## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java

@@ -1654,28 +1655,28 @@ public boolean checkAccess(OzoneObj ozObject, RequestContext context)
     metadataManager.getLock().acquireReadLock(BUCKET_LOCK, volume, bucket);
     try {
       validateBucket(volume, bucket);
-      OmKeyInfo keyInfo = null;
-      try {
-        OzoneFileStatus fileStatus = getFileStatus(args);
-        keyInfo = fileStatus.getKeyInfo();
-        if (keyInfo == null) {
-          // the key does not exist, but it is a parent "dir" of some key
-          // let access be determined based on volume/bucket/prefix ACL
-          if (LOG.isDebugEnabled()) {
-            LOG.debug("key:{} is non-existent parent, permit access to user:{}",
-                keyName, context.getClientUgi());
-          }
-          return true;
-        }
-      } catch (OMException e) {
-        if (e.getResult() == FILE_NOT_FOUND) {
-          keyInfo = metadataManager.getOpenKeyTable().get(objectKey);
+      OmKeyInfo keyInfo;
+
+      if (ozObject.getResourceType() == OPEN_KEY) {

Review comment: what's the difference between OPEN_KEY->CREATE and KEY->CREATE?
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…
xiaoyuyao commented on a change in pull request #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests… URL: https://github.com/apache/hadoop/pull/1528#discussion_r334088572

## File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/acl/OzoneObj.java

@@ -95,6 +95,7 @@ public StoreType getStoreType() {
     VOLUME(OzoneConsts.VOLUME),
     BUCKET(OzoneConsts.BUCKET),
     KEY(OzoneConsts.KEY),
+    OPEN_KEY(OzoneConsts.OPEN_KEY),

Review comment: Can you add some comment on why OPEN_KEY is needed as ozone object type? Do we have the corresponding acl type semantics documented somewhere?
[GitHub] [hadoop] hadoop-yetus commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation
hadoop-yetus commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation URL: https://github.com/apache/hadoop/pull/1619#issuecomment-541135916 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 43 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1104 | trunk passed | | +1 | compile | 35 | trunk passed | | +1 | checkstyle | 29 | trunk passed | | +1 | mvnsite | 39 | trunk passed | | +1 | shadedclient | 813 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 28 | trunk passed | | 0 | spotbugs | 61 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 58 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 34 | the patch passed | | +1 | compile | 29 | the patch passed | | +1 | javac | 29 | the patch passed | | +1 | checkstyle | 21 | the patch passed | | +1 | mvnsite | 34 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 809 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 29 | hadoop-tools_hadoop-aws generated 0 new + 4 unchanged - 1 fixed = 4 total (was 5) | | +1 | findbugs | 62 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 75 | hadoop-aws in the patch passed. | | +1 | asflicense | 33 | The patch does not generate ASF License warnings. 
| | | | 3381 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1619 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 529d7def58f5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / ec86f42 | | Default Java | 1.8.0_222 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/3/testReport/ | | Max. process+thread count | 450 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/3/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1645: YARN-9881. Change Cluster_Scheduler_API's Item memory‘s datatype from int to long.
hadoop-yetus commented on issue #1645: YARN-9881. Change Cluster_Scheduler_API's Item memory‘s datatype from int to long. URL: https://github.com/apache/hadoop/pull/1645#issuecomment-541134837 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 40 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 43 | Maven dependency ordering for branch | | +1 | mvninstall | 1140 | trunk passed | | +1 | compile | 522 | trunk passed | | +1 | checkstyle | 90 | trunk passed | | +1 | mvnsite | 181 | trunk passed | | +1 | shadedclient | 997 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 109 | trunk passed | | 0 | spotbugs | 22 | Used deprecated FindBugs config; considering switching to SpotBugs. | | 0 | findbugs | 22 | branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no findbugs output file (findbugsXml.xml) | | 0 | findbugs | 22 | branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui no findbugs output file (findbugsXml.xml) | ||| _ Patch Compile Tests _ | | 0 | mvndep | 18 | Maven dependency ordering for patch | | +1 | mvninstall | 108 | the patch passed | | -1 | jshint | 215 | The patch generated 1761 new + 0 unchanged - 0 fixed = 1761 total (was 0) | | +1 | compile | 477 | the patch passed | | +1 | javac | 477 | the patch passed | | -0 | checkstyle | 83 | hadoop-yarn-project/hadoop-yarn: The patch generated 2 new + 21 unchanged - 0 fixed = 23 total (was 21) | | +1 | mvnsite | 162 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. 
| | +1 | shadedclient | 747 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 101 | the patch passed | | 0 | findbugs | 20 | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site has no data from findbugs | | 0 | findbugs | 20 | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui has no data from findbugs | ||| _ Other Tests _ | | +1 | unit | 171 | hadoop-yarn-server-common in the patch passed. | | -1 | unit | 5027 | hadoop-yarn-server-resourcemanager in the patch failed. | | +1 | unit | 18 | hadoop-yarn-site in the patch passed. | | +1 | unit | 230 | hadoop-yarn-ui in the patch passed. | | +1 | asflicense | 49 | The patch does not generate ASF License warnings. | | | | 10922 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.yarn.server.resourcemanager.webapp.fairscheduler.TestRMWebServicesFairSchedulerCustomResourceTypes | | | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesContainers | | | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppAttempts | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1645/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1645 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle jshint | | uname | Linux a05722bb49c5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / ec86f42 | | Default Java | 1.8.0_222 | | jshint | https://builds.apache.org/job/hadoop-multibranch/job/PR-1645/1/artifact/out/diff-patch-jshint.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1645/1/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt | | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1645/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1645/1/testReport/ | | Max. process+thread count | 816 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1645/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 jshint=2.10.2 | | Powered
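For context on YARN-9881 above: the reason the Cluster Scheduler API's memory fields need to be long rather than int is plain integer overflow once aggregate cluster memory in MB exceeds Integer.MAX_VALUE (roughly 2 PB). A standalone illustration, not YARN code:

```java
public class MemoryOverflowDemo {
    // Aggregate cluster memory in MB; long arithmetic avoids overflow
    // because mbPerNode is a long, so the multiply is done in 64 bits.
    static long totalMemoryMb(int nodes, long mbPerNode) {
        return nodes * mbPerNode;
    }

    public static void main(String[] args) {
        // 3000 nodes with 1 TiB (1024 * 1024 MB) each: 3_145_728_000 MB,
        // which is larger than Integer.MAX_VALUE (2_147_483_647).
        long asLong = totalMemoryMb(3000, 1024L * 1024);
        int asInt = (int) asLong; // truncating cast wraps to a negative value
        System.out.println(asLong + " MB as int: " + asInt);
    }
}
```

The same wraparound happens silently if the API field itself is declared int, which is what the patch fixes.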
[GitHub] [hadoop] steveloughran commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation
steveloughran commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation URL: https://github.com/apache/hadoop/pull/1619#issuecomment-541132415 (Note: that stack trace is logged at debug level; users just see the "location unknown" message.)
[GitHub] [hadoop] steveloughran commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation
steveloughran commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation URL: https://github.com/apache/hadoop/pull/1619#issuecomment-541132228 Also just ran the CLI against a public bucket which blocks this operation ``` Filesystem s3a://tpcds10g 2019-10-11 17:24:14,361 [main] DEBUG s3a.Invoker (DurationInfo.java:(74)) - Starting: getBucketLocation() 2019-10-11 17:24:14,472 [main] DEBUG s3a.Invoker (DurationInfo.java:close(89)) - getBucketLocation(): duration 0:00.110s 2019-10-11 17:24:14,473 [main] DEBUG s3guard.S3GuardTool (S3GuardTool.java:run(1232)) - failed to get bucket location java.nio.file.AccessDeniedException: tpcds10g: getBucketLocation() on tpcds10g: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: CE32462FD451F00D; S3 Extended Request ID: /pM+yWUtyByovVFTzOHPDDEQhzQAuF9zVrimxhbzaX6b8iYv6pgGO9cNbhL30eZ9wOTBcGpyvIY=), S3 Extended Request ID: /pM+yWUtyByovVFTzOHPDDEQhzQAuF9zVrimxhbzaX6b8iYv6pgGO9cNbhL30eZ9wOTBcGpyvIY=:AccessDenied at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:244) at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:112) at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:315) at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:407) at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:311) at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:286) at org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:741) at org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:724) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(S3GuardTool.java:1227) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:429) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1816) at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1825) Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: CE32462FD451F00D; S3 Extended Request ID: /pM+yWUtyByovVFTzOHPDDEQhzQAuF9zVrimxhbzaX6b8iYv6pgGO9cNbhL30eZ9wOTBcGpyvIY=), S3 Extended Request ID: /pM+yWUtyByovVFTzOHPDDEQhzQAuF9zVrimxhbzaX6b8iYv6pgGO9cNbhL30eZ9wOTBcGpyvIY= at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686) at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4920) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4866) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4860) at com.amazonaws.services.s3.AmazonS3Client.getBucketLocation(AmazonS3Client.java:999) at com.amazonaws.services.s3.AmazonS3Client.getBucketLocation(AmazonS3Client.java:1005) at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getBucketLocation$3(S3AFileSystem.java:742) at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110) ... 
11 more
Location unknown -caller lacks s3:GetBucketLocation permission
Filesystem s3a://tpcds10g is not using S3Guard
The "magic" committer is supported

S3A Client
	Signing Algorithm: fs.s3a.signing-algorithm=(unset)
	Endpoint: fs.s3a.endpoint=(unset)
	Encryption: fs.s3a.server-side-encryption-algorithm=none
	Input seek policy: fs.s3a.experimental.input.fadvise=normal
	Change Detection Source: fs.s3a.change.detection.source=etag
	Change Detection Mode: fs.s3a.change.detection.mode=server
Delegation token support is disabled
```
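The behaviour shown in that console run — catch the 403 from getBucketLocation(), keep the stack at DEBUG, and continue with a one-line fallback — can be sketched as follows (a hypothetical helper, not the actual S3GuardTool code):

```java
import java.nio.file.AccessDeniedException;
import java.util.concurrent.Callable;

public class BucketInfoFallback {
    // Degrade gracefully when the caller lacks s3:GetBucketLocation:
    // the full AccessDeniedException would be logged at DEBUG only,
    // and the tool keeps printing the rest of the bucket report.
    static String locationOrUnknown(Callable<String> getBucketLocation) {
        try {
            return "Location: " + getBucketLocation.call();
        } catch (AccessDeniedException e) {
            return "Location unknown -caller lacks s3:GetBucketLocation permission";
        } catch (Exception e) {
            throw new IllegalStateException(e); // anything else is still fatal
        }
    }

    public static void main(String[] args) {
        System.out.println(locationOrUnknown(() -> "eu-west-1"));
        System.out.println(locationOrUnknown(() -> {
            throw new AccessDeniedException("tpcds10g"); // simulated 403
        }));
    }
}
```

Only the permission failure is downgraded; other exceptions still propagate, so genuine connectivity or auth-chain problems are not hidden.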
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation
hadoop-yetus removed a comment on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation URL: https://github.com/apache/hadoop/pull/1619#issuecomment-539612732 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 74 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1071 | trunk passed | | +1 | compile | 36 | trunk passed | | +1 | checkstyle | 28 | trunk passed | | +1 | mvnsite | 40 | trunk passed | | +1 | shadedclient | 788 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 30 | trunk passed | | 0 | spotbugs | 59 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 57 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 34 | the patch passed | | +1 | compile | 28 | the patch passed | | +1 | javac | 28 | the patch passed | | +1 | checkstyle | 20 | the patch passed | | +1 | mvnsite | 32 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 780 | patch has no errors when building and testing our client artifacts. | | -1 | javadoc | 27 | hadoop-tools_hadoop-aws generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) | | +1 | findbugs | 61 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 87 | hadoop-aws in the patch passed. | | +1 | asflicense | 33 | The patch does not generate ASF License warnings. 
| | | | 3330 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1619 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux d042b3ad22c1 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 91320b4 | | Default Java | 1.8.0_222 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/1/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/1/testReport/ | | Max. process+thread count | 411 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation
hadoop-yetus commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation URL: https://github.com/apache/hadoop/pull/1619#issuecomment-541130201 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 92 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1333 | trunk passed | | +1 | compile | 32 | trunk passed | | +1 | checkstyle | 23 | trunk passed | | +1 | mvnsite | 36 | trunk passed | | +1 | shadedclient | 854 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 26 | trunk passed | | 0 | spotbugs | 59 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 58 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 32 | the patch passed | | +1 | compile | 26 | the patch passed | | +1 | javac | 26 | the patch passed | | +1 | checkstyle | 19 | the patch passed | | +1 | mvnsite | 30 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 926 | patch has no errors when building and testing our client artifacts. | | -1 | javadoc | 30 | hadoop-tools_hadoop-aws generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) | | +1 | findbugs | 71 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 102 | hadoop-aws in the patch passed. | | +1 | asflicense | 37 | The patch does not generate ASF License warnings. 
| | | | 3837 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1619 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 88e94afb263b 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / ec86f42 | | Default Java | 1.8.0_222 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/2/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/2/testReport/ | | Max. process+thread count | 337 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1630: HADOOP-16645. S3A Delegation Token extension point to use StoreContext.
hadoop-yetus commented on issue #1630: HADOOP-16645. S3A Delegation Token extension point to use StoreContext. URL: https://github.com/apache/hadoop/pull/1630#issuecomment-541129627 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 85 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 9 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1260 | trunk passed | | +1 | compile | 33 | trunk passed | | +1 | checkstyle | 26 | trunk passed | | +1 | mvnsite | 37 | trunk passed | | +1 | shadedclient | 875 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 26 | trunk passed | | 0 | spotbugs | 61 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 58 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 33 | the patch passed | | +1 | compile | 26 | the patch passed | | +1 | javac | 26 | the patch passed | | -0 | checkstyle | 18 | hadoop-tools/hadoop-aws: The patch generated 5 new + 20 unchanged - 0 fixed = 25 total (was 20) | | +1 | mvnsite | 31 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 851 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 24 | the patch passed | | +1 | findbugs | 61 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 80 | hadoop-aws in the patch passed. | | +1 | asflicense | 29 | The patch does not generate ASF License warnings. 
| | | | 3654 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1630/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1630 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 3a7cbd19644f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / ec86f42 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1630/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1630/2/testReport/ | | Max. process+thread count | 421 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1630/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-16478) S3Guard bucket-info fails if the bucket location is denied to the caller
[ https://issues.apache.org/jira/browse/HADOOP-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897291#comment-16897291 ] Steve Loughran edited comment on HADOOP-16478 at 10/11/19 4:14 PM: --- {code} java.nio.file.AccessDeniedException:something: getBucketLocation() on s3a://restricted: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 030653A1119B53A7; S3 Extended Request ID: lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=), S3 Extended Request ID: lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=:AccessDenied at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:243) at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111) at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:314) at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:406) at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:310) at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:285) at org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:716) at org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:703) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(S3GuardTool.java:1185) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:401) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1672) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1681) Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 030653A1119B53A7; S3 Extended Request ID: lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=), S3 Extended Request ID: 
lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw= at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686) at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4920) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4866) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4860) at com.amazonaws.services.s3.AmazonS3Client.getBucketLocation(AmazonS3Client.java:999) at com.amazonaws.services.s3.AmazonS3Client.getBucketLocation(AmazonS3Client.java:1005) at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getBucketLocation$3(S3AFileSystem.java:717) at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109) ... 
11 more {code} was (Author: ste...@apache.org): {code} java.nio.file.AccessDeniedException: mow-dev-istio-west-demo: getBucketLocation() on s3a://restricted: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 030653A1119B53A7; S3 Extended Request ID: lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=), S3 Extended Request ID: lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=:AccessDenied at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:243) at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111) at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:314) at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:406) at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:310) at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:285) at org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:716) at
[jira] [Commented] (HADOOP-13836) Securing Hadoop RPC using SSL
[ https://issues.apache.org/jira/browse/HADOOP-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949579#comment-16949579 ] hirik commented on HADOOP-13836: [~kartheek] is there any release timeline for this feature? > Securing Hadoop RPC using SSL > - > > Key: HADOOP-13836 > URL: https://issues.apache.org/jira/browse/HADOOP-13836 > Project: Hadoop Common > Issue Type: New Feature > Components: ipc >Reporter: kartheek muthyala >Assignee: kartheek muthyala >Priority: Major > Attachments: HADOOP-13836-v2.patch, HADOOP-13836-v3.patch, > HADOOP-13836-v4.patch, HADOOP-13836.patch, Secure IPC OSS Proposal-1.pdf, > SecureIPC Performance Analysis-OSS.pdf > > > Today, RPC connections in Hadoop are encrypted using Simple Authentication & > Security Layer (SASL), with the Kerberos ticket based authentication or > Digest-md5 checksum based authentication protocols. This proposal is about > enhancing this cipher suite with SSL/TLS based encryption and authentication. > SSL/TLS is a proposed Internet Engineering Task Force (IETF) standard, that > provides data security and integrity across two different end points in a > network. This protocol has made its way to a number of applications such as > web browsing, email, internet faxing, messaging, VOIP etc. And supporting > this cipher suite at the core of Hadoop would give a good synergy with the > applications on top and also bolster industry adoption of Hadoop. > The Server and Client code in Hadoop IPC should support the following modes > of communication > 1.Plain > 2. SASL encryption with an underlying authentication > 3. SSL based encryption and authentication (x509 certificate) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
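The proposal above names three IPC modes, with mode 3 (SSL/TLS with x509 authentication) building on the standard JSSE stack. A minimal JDK-only sketch of that building block, with hypothetical class and method names (this is not Hadoop IPC code):

```java
import javax.net.ssl.SSLServerSocketFactory;

// Illustrative JDK-only sketch, not Hadoop's ipc.Server: an SSL/TLS mode
// would obtain TLS-capable server sockets from JSSE. A real server would
// configure key material (keystore, enabled protocols) before accepting
// connections; here we only query the default factory.
class SslIpcSketch {
    static String[] supportedCipherSuites() {
        SSLServerSocketFactory factory =
            (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
        return factory.getSupportedCipherSuites();
    }
}
```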
[GitHub] [hadoop] hadoop-yetus commented on issue #1646: HADOOP-15430. hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard
hadoop-yetus commented on issue #1646: HADOOP-15430. hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard URL: https://github.com/apache/hadoop/pull/1646#issuecomment-541124056 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 52 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1178 | trunk passed | | +1 | compile | 37 | trunk passed | | +1 | checkstyle | 28 | trunk passed | | +1 | mvnsite | 41 | trunk passed | | +1 | shadedclient | 841 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 30 | trunk passed | | 0 | spotbugs | 61 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 59 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 34 | the patch passed | | +1 | compile | 29 | the patch passed | | +1 | javac | 29 | the patch passed | | -0 | checkstyle | 21 | hadoop-tools/hadoop-aws: The patch generated 4 new + 18 unchanged - 0 fixed = 22 total (was 18) | | +1 | mvnsite | 34 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 837 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 24 | the patch passed | | +1 | findbugs | 62 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 87 | hadoop-aws in the patch passed. | | +1 | asflicense | 33 | The patch does not generate ASF License warnings. 
| | | | 3543 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1646/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1646 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 9e6a187f2297 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / ec86f42 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1646/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1646/2/testReport/ | | Max. process+thread count | 417 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1646/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1626: YARN-9880. In YARN ui2 attempts tab, The running Application Attempt's ElapsedTime is incorrect.
hadoop-yetus commented on issue #1626: YARN-9880. In YARN ui2 attempts tab, The running Application Attempt's ElapsedTime is incorrect. URL: https://github.com/apache/hadoop/pull/1626#issuecomment-541122591 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 81 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 28 | Maven dependency ordering for branch | | +1 | mvninstall | 1375 | trunk passed | | +1 | compile | 558 | trunk passed | | +1 | checkstyle | 88 | trunk passed | | +1 | mvnsite | 168 | trunk passed | | +1 | shadedclient | 1121 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 98 | trunk passed | | 0 | spotbugs | 18 | Used deprecated FindBugs config; considering switching to SpotBugs. | | 0 | findbugs | 18 | branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no findbugs output file (findbugsXml.xml) | | 0 | findbugs | 18 | branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui no findbugs output file (findbugsXml.xml) | ||| _ Patch Compile Tests _ | | 0 | mvndep | 17 | Maven dependency ordering for patch | | +1 | mvninstall | 104 | the patch passed | | -1 | jshint | 225 | The patch generated 1761 new + 0 unchanged - 0 fixed = 1761 total (was 0) | | +1 | compile | 517 | the patch passed | | +1 | javac | 517 | the patch passed | | -0 | checkstyle | 93 | hadoop-yarn-project/hadoop-yarn: The patch generated 2 new + 21 unchanged - 0 fixed = 23 total (was 21) | | +1 | mvnsite | 153 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. 
| | +1 | shadedclient | 899 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 94 | the patch passed | | 0 | findbugs | 16 | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site has no data from findbugs | | 0 | findbugs | 18 | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui has no data from findbugs | ||| _ Other Tests _ | | +1 | unit | 177 | hadoop-yarn-server-common in the patch passed. | | -1 | unit | 5277 | hadoop-yarn-server-resourcemanager in the patch failed. | | +1 | unit | 15 | hadoop-yarn-site in the patch passed. | | +1 | unit | 253 | hadoop-yarn-ui in the patch passed. | | +1 | asflicense | 44 | The patch does not generate ASF License warnings. | | | | 11821 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppAttempts | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1626/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1626 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle jshint | | uname | Linux 4db16c9dd15b 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / ec86f42 | | Default Java | 1.8.0_222 | | jshint | https://builds.apache.org/job/hadoop-multibranch/job/PR-1626/3/artifact/out/diff-patch-jshint.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1626/3/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1626/3/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1626/3/testReport/ | | Max. 
process+thread count | 808 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1626/3/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 jshint=2.10.2 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] vivekratnavel commented on issue #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…
vivekratnavel commented on issue #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests… URL: https://github.com/apache/hadoop/pull/1528#issuecomment-541117783 /retest
[jira] [Updated] (HADOOP-16645) S3A Delegation Token extension point to use StoreContext
[ https://issues.apache.org/jira/browse/HADOOP-16645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16645: Status: Patch Available (was: Open) > S3A Delegation Token extension point to use StoreContext > > > Key: HADOOP-16645 > URL: https://issues.apache.org/jira/browse/HADOOP-16645 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > > Move the S3A DT code from HADOOP-14556 to take a StoreContext ref in its > ctor, over a S3AFileSystem
[jira] [Commented] (HADOOP-16613) s3a to set fake directory marker contentType to application/x-directory
[ https://issues.apache.org/jira/browse/HADOOP-16613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949553#comment-16949553 ] Steve Loughran commented on HADOOP-16613: - # should we ourselves say content-type == application/x-directory means it is a dir, irrespective of len # how to react to something without a / which says it is an x-directory? > s3a to set fake directory marker contentType to application/x-directory > --- > > Key: HADOOP-16613 > URL: https://issues.apache.org/jira/browse/HADOOP-16613 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.1, 3.1.3 >Reporter: Jose Torres >Priority: Minor > > S3AFileSystem doesn't set a contentType for fake directory files, causing it > to be inferred as "application/octet-stream". But fake directory files > created through the S3 web console have content type > "application/x-directory". We may want to adopt the web console behavior as a > standard, since some systems will rely on content type and not size + > trailing slash to determine if an object represents a directory.
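The heuristic debated in the comment above can be sketched as a predicate. This is a hypothetical illustration, not S3AFileSystem's actual code: content type "application/x-directory" marks a directory irrespective of length, otherwise the classic zero-length-plus-trailing-slash rule applies.

```java
// Hypothetical sketch (names illustrative, not the S3A implementation).
class DirMarkerHeuristic {
    static final String DIR_CONTENT_TYPE = "application/x-directory";

    // contentType may be null when the object has none recorded.
    static boolean isDirectoryMarker(String key, String contentType, long length) {
        if (DIR_CONTENT_TYPE.equals(contentType)) {
            return true; // question 1: content type wins, irrespective of len
        }
        // question 2 is left open here: a key without a trailing "/" is NOT
        // treated as a directory unless its content type says so
        return length == 0 && key.endsWith("/");
    }
}
```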
[GitHub] [hadoop] hadoop-yetus commented on issue #1646: HADOOP-15430. hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard
hadoop-yetus commented on issue #1646: HADOOP-15430. hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard URL: https://github.com/apache/hadoop/pull/1646#issuecomment-54084 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 45 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1311 | trunk passed | | +1 | compile | 38 | trunk passed | | +1 | checkstyle | 25 | trunk passed | | +1 | mvnsite | 41 | trunk passed | | +1 | shadedclient | 824 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 32 | trunk passed | | 0 | spotbugs | 63 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 61 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 38 | the patch passed | | +1 | compile | 29 | the patch passed | | +1 | javac | 29 | the patch passed | | -0 | checkstyle | 21 | hadoop-tools/hadoop-aws: The patch generated 4 new + 18 unchanged - 0 fixed = 22 total (was 18) | | +1 | mvnsite | 34 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 878 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 27 | the patch passed | | +1 | findbugs | 64 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 91 | hadoop-aws in the patch passed. | | +1 | asflicense | 35 | The patch does not generate ASF License warnings. 
| | | | 3707 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1646/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1646 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 00309628183a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / ec86f42 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1646/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1646/1/testReport/ | | Max. process+thread count | 412 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1646/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[jira] [Resolved] (HADOOP-16607) s3a attempts to look up password/encryption fail if JCEKS file unreadable
[ https://issues.apache.org/jira/browse/HADOOP-16607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-16607. - Resolution: Duplicate > s3a attempts to look up password/encryption fail if JCEKS file unreadable > - > > Key: HADOOP-16607 > URL: https://issues.apache.org/jira/browse/HADOOP-16607 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, security >Affects Versions: 3.2.1, 3.1.3 >Reporter: Steve Loughran >Priority: Minor > > Hive deployments can use a JCEKs file to store secrets, which it sets up > To be readable only by the Hive user, listing it under > hadoop.credential.providers. > When it tries to create an S3A FS instance as another user, via a doAs{} > clause, the S3A FS getPassword() call fails on the subsequent > AccessDeniedException -even if the secret it is looking for is in the XML file > or, as in the case of encryption settings, or session key undefined. > I can you point the blame at hive for this -it's the one with a forbidden > JCEKS file on the provider path, but I think it is easiest to fix in S3AUtils > than > in hive, and safer then changing Configuration. > ABFS is likely to see the same problem. > I propose an option to set the fallback policy. > I initially thought about always handling this: > Catching the exception, attempting to downgrade to Reading XML and if that > fails rethrowing the caught exception. > However, this will do the wrong thing if the option is completely undefined, > As is common with the encryption settings. > I don't want to simply default to log and continue here though, as it may be > a legitimate failure -such as when you really do want to read secrets from > such a source. > Issue: what fallback policies? > > * fail: fail fast. today's policy; the default. > * ignore: log and continue > > We could try and be clever in future. 
To get away with that, we would have > to declare which options were considered compulsory and re-throw the caught > Exception if no value was found in the XML file. > > That can be a future enhancement -but it is why I want the policy to be an > enumeration, rather than a simple boolean. > > Tests: should be straightforward; set hadoop.credential.providers to a > non-existent file and expected to be processed according to the settings. >
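The enum-based fallback policy proposed above can be sketched as control flow. All names here are hypothetical, not the actual S3AUtils change; the point is only the shape: FAIL rethrows the AccessDeniedException (today's behaviour), IGNORE falls back to the XML value.

```java
import java.io.IOException;
import java.nio.file.AccessDeniedException;

// Illustrative sketch of the proposed fallback policy (hypothetical names).
class SecretLookupSketch {
    enum CredentialProviderFallback { FAIL, IGNORE }

    interface Lookup { String get() throws IOException; }

    static String getPassword(Lookup providers, String xmlValue,
                              CredentialProviderFallback policy) throws IOException {
        try {
            String secret = providers.get(); // may throw AccessDeniedException
            return secret != null ? secret : xmlValue;
        } catch (AccessDeniedException e) {
            if (policy == CredentialProviderFallback.FAIL) {
                throw e; // fail fast: today's policy and the default
            }
            // IGNORE: a real implementation would log the failure, then
            // continue with whatever the XML configuration holds
            return xmlValue;
        }
    }
}
```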
[GitHub] [hadoop] hadoop-yetus commented on issue #1313: HDFS-13118. SnapshotDiffReport should provide the INode type.
hadoop-yetus commented on issue #1313: HDFS-13118. SnapshotDiffReport should provide the INode type. URL: https://github.com/apache/hadoop/pull/1313#issuecomment-541110738 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 45 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 5 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 21 | Maven dependency ordering for branch | | +1 | mvninstall | 1150 | trunk passed | | +1 | compile | 212 | trunk passed | | +1 | checkstyle | 68 | trunk passed | | +1 | mvnsite | 123 | trunk passed | | +1 | shadedclient | 931 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 121 | trunk passed | | 0 | spotbugs | 172 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 310 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 12 | Maven dependency ordering for patch | | +1 | mvninstall | 117 | the patch passed | | +1 | compile | 206 | the patch passed | | -1 | cc | 206 | hadoop-hdfs-project generated 4 new + 15 unchanged - 4 fixed = 19 total (was 19) | | +1 | javac | 206 | the patch passed | | +1 | checkstyle | 58 | hadoop-hdfs-project: The patch generated 0 new + 384 unchanged - 6 fixed = 384 total (was 390) | | +1 | mvnsite | 112 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 779 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 110 | the patch passed | | +1 | findbugs | 329 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 125 | hadoop-hdfs-client in the patch passed. | | -1 | unit | 5268 | hadoop-hdfs in the patch failed. | | +1 | asflicense | 44 | The patch does not generate ASF License warnings. 
| | | | 10163 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | | hadoop.hdfs.TestLeaseRecovery2 | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/15/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1313 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux 905b772da59e 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / ec86f42 | | Default Java | 1.8.0_222 | | cc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/15/artifact/out/diff-compile-cc-hadoop-hdfs-project.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/15/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/15/testReport/ | | Max. process+thread count | 3885 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/15/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[jira] [Updated] (HADOOP-16478) S3Guard bucket-info fails if the bucket location is denied to the caller
[ https://issues.apache.org/jira/browse/HADOOP-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16478: Status: Patch Available (was: Open) > S3Guard bucket-info fails if the bucket location is denied to the caller > > > Key: HADOOP-16478 > URL: https://issues.apache.org/jira/browse/HADOOP-16478 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > IF you call "Hadoop s3guard bucket info" on a bucket and you don't have > permission to list the bucket location, then you get a stack trace, with all > other diagnostics being missing. > Preferred: catch the exception, warn its unknown and only log@ debug
[jira] [Assigned] (HADOOP-16478) S3Guard bucket-info fails if the bucket location is denied to the caller
[ https://issues.apache.org/jira/browse/HADOOP-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-16478: --- Assignee: Steve Loughran > S3Guard bucket-info fails if the bucket location is denied to the caller > > > Key: HADOOP-16478 > URL: https://issues.apache.org/jira/browse/HADOOP-16478 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > IF you call "Hadoop s3guard bucket info" on a bucket and you don't have > permission to list the bucket location, then you get a stack trace, with all > other diagnostics being missing. > Preferred: catch the exception, warn its unknown and only log@ debug
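The "preferred" behaviour described in the issue above can be sketched with a hedged, self-contained stub. Names are hypothetical (this is not the actual S3GuardTool patch): swallow the AccessDeniedException from the location probe, report the location as unknown, and let the rest of bucket-info's diagnostics print.

```java
import java.nio.file.AccessDeniedException;

// Illustrative sketch of catch-warn-and-continue for bucket-info.
class BucketInfoSketch {
    interface LocationLookup { String locate() throws AccessDeniedException; }

    static String describeLocation(LocationLookup fs) {
        try {
            return fs.locate();
        } catch (AccessDeniedException e) {
            // real code would only LOG.debug("failed to get bucket location", e)
            return "unknown (access denied)";
        }
    }
}
```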
[jira] [Commented] (HADOOP-15961) S3A committers: make sure there's regular progress() calls
[ https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949537#comment-16949537 ] Hadoop QA commented on HADOOP-15961: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s{color} | {color:red} HADOOP-15961 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-15961 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12965295/HADOOP-15961-003.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16589/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > S3A committers: make sure there's regular progress() calls > -- > > Key: HADOOP-15961 > URL: https://issues.apache.org/jira/browse/HADOOP-15961 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Steve Loughran >Assignee: lqjacklee >Priority: Minor > Attachments: HADOOP-15961-001.patch, HADOOP-15961-002.patch, > HADOOP-15961-003.patch > > > MAPREDUCE-7164 highlights how inside job/task commit more context.progress() > callbacks are needed, just for HDFS. > the S3A committers should be reviewed similarly. > At a glance: > StagingCommitter.commitTaskInternal() is at risk if a task write upload > enough data to the localfs that the upload takes longer than the timeout. > it should call progress it every single file commits, or better: modify > {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks > after every part upload. 
[jira] [Commented] (HADOOP-16324) S3A Delegation Token code to spell "Marshalled" as Marshaled
[ https://issues.apache.org/jira/browse/HADOOP-16324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949534#comment-16949534 ] Steve Loughran commented on HADOOP-16324: - I'm doing this in the HADOOP-16645 PR as that's backwards incompatible too; for this to go in it'll need co-ordination with those people who are using the current release (sorry!) > S3A Delegation Token code to spell "Marshalled" as Marshaled > > > Key: HADOOP-16324 > URL: https://issues.apache.org/jira/browse/HADOOP-16324 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > > Apparently {{MarshalledCredentials}} is the EN_UK locale spelling; the > EN_US one is {{Marshaled}}. Fix in code and docs before anything ships, > because those classes do end up being used by all external implementations of > S3A Delegation Tokens. > I am grateful to [~rlevas] for pointing out the error of my ways.
[GitHub] [hadoop] hadoop-yetus commented on issue #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD.
hadoop-yetus commented on issue #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD. URL: https://github.com/apache/hadoop/pull/1601#issuecomment-541105186 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 82 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1280 | trunk passed | | +1 | compile | 32 | trunk passed | | +1 | checkstyle | 24 | trunk passed | | +1 | mvnsite | 38 | trunk passed | | +1 | shadedclient | 883 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 26 | trunk passed | | 0 | spotbugs | 63 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 61 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 32 | the patch passed | | +1 | compile | 28 | the patch passed | | +1 | javac | 28 | the patch passed | | +1 | checkstyle | 19 | the patch passed | | +1 | mvnsite | 34 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 876 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 22 | the patch passed | | +1 | findbugs | 64 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 69 | hadoop-aws in the patch passed. | | +1 | asflicense | 29 | The patch does not generate ASF License warnings. 
| | | | 3698 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1601/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1601 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 22c4d6fb1156 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / ec86f42 | | Default Java | 1.8.0_222 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1601/6/testReport/ | | Max. process+thread count | 424 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1601/6/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ehiggs opened a new pull request #1648: HDFS-12478. Command line tools for managing Provided Storage Backup m…
ehiggs opened a new pull request #1648: HDFS-12478. Command line tools for managing Provided Storage Backup m… URL: https://github.com/apache/hadoop/pull/1648 FYI @virajith.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1599: HADOOP-16632. Speculating & Partitioned S3A magic committers can leave pending files under __magic
hadoop-yetus removed a comment on issue #1599: HADOOP-16632. Speculating & Partitioned S3A magic committers can leave pending files under __magic URL: https://github.com/apache/hadoop/pull/1599#issuecomment-538521435 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 39 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1069 | trunk passed | | +1 | compile | 36 | trunk passed | | +1 | checkstyle | 28 | trunk passed | | +1 | mvnsite | 38 | trunk passed | | +1 | shadedclient | 800 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 28 | trunk passed | | 0 | spotbugs | 60 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 59 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 33 | the patch passed | | +1 | compile | 28 | the patch passed | | +1 | javac | 28 | the patch passed | | -0 | checkstyle | 20 | hadoop-tools/hadoop-aws: The patch generated 1 new + 6 unchanged - 1 fixed = 7 total (was 7) | | +1 | mvnsite | 33 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 773 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 26 | the patch passed | | +1 | findbugs | 61 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 85 | hadoop-aws in the patch passed. | | +1 | asflicense | 34 | The patch does not generate ASF License warnings. 
| | | | 3289 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1599/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1599 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 3cabdd8543c2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 10bdc59 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1599/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1599/1/testReport/ | | Max. process+thread count | 447 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1599/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus
[ https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15920: Resolution: Done Status: Resolved (was: Patch Available) > get patch for S3a nextReadPos(), through Yetus > -- > > Key: HADOOP-15920 > URL: https://issues.apache.org/jira/browse/HADOOP-15920 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.1.1 >Reporter: Steve Loughran >Assignee: lqjacklee >Priority: Major > Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, > HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch, > HADOOP-15870-008.patch, HADOOP-15920-06.patch, HADOOP-15920-07.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16644) Retrieve modtime of PUT file from store, via response or HEAD
[ https://issues.apache.org/jira/browse/HADOOP-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949514#comment-16949514 ] Steve Loughran commented on HADOOP-16644: - Rummaging around the open JIRAs, HADOOP-16176 proposes adding more tests, and highlights that the modtime of a multipart upload may be that of the start time, not the end time. So for a big put, it will be way off. That HEAD sounds critical there > Retrieve modtime of PUT file from store, via response or HEAD > > > Key: HADOOP-16644 > URL: https://issues.apache.org/jira/browse/HADOOP-16644 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.3.0 > Environment: -Dparallel-tests -DtestsThreadCount=8 > -Dfailsafe.runOrder=balanced -Ds3guard -Ddynamo -Dscale > h2. Hypothesis: > the timestamp of the source file is being picked up from S3Guard, but when > the NM does a getFileStatus call, a HEAD check is made, and this (due to the > overloaded test system) is out of sync with the listing. S3Guard is updated, > the corrected date returned and the localisation fails. 
>Reporter: Steve Loughran >Priority: Major > > Terasort of directory committer failing in resource localisation; the > partitions.lst file has a different TS from that expected > Happens under loaded integration tests (threads = 8; not standalone); > non-auth s3guard > {code} > 2019-10-08 11:50:29,774 [IPC Server handler 4 on 55983] WARN > localizer.ResourceLocalizationService > (ResourceLocalizationService.java:processHeartbeat(1150)) - { > s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst, > 1570531828143, FILE, null } failed: Resource > s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst > changed on src filesystem (expected 1570531828143, was 1570531828000 > java.io.IOException: Resource > s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst > changed on src filesystem (expected 1570531828143, was 1570531828000 > {code}
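The failure mode in the quoted log can be reproduced numerically: S3 object timestamps have second granularity, so a millisecond modtime remembered at PUT time (here, via S3Guard) will never equal what a later HEAD reports, and YARN's strict equality check on the localised resource fails. This sketch only illustrates the arithmetic; it is not Hadoop code.

```java
public class ModTimeSketch {
    // Truncate a millisecond timestamp to S3's second granularity,
    // mimicking what a HEAD on the object would report.
    static long toS3Granularity(long millis) {
        return (millis / 1000) * 1000;
    }

    public static void main(String[] args) {
        long recorded = 1570531828143L;            // what the client/S3Guard remembered
        long fromHead = toS3Granularity(recorded); // what HEAD reports: 1570531828000
        System.out.println(recorded + " vs " + fromHead);
        // YARN's "changed on src filesystem" check is a strict equality test:
        System.out.println("unchanged? " + (recorded == fromHead));
    }
}
```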
[jira] [Assigned] (HADOOP-16164) S3aDelegationTokens to add accessor for tests to get at the token binding
[ https://issues.apache.org/jira/browse/HADOOP-16164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-16164: --- Assignee: (was: Steve Loughran) > S3aDelegationTokens to add accessor for tests to get at the token binding > - > > Key: HADOOP-16164 > URL: https://issues.apache.org/jira/browse/HADOOP-16164 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Priority: Critical > > For testing, it turns out to be useful to get at the current token binding in > the S3ADelegationTokens instance of a filesystem. > provide an accessor, tagged as for testing only -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15961) S3A committers: make sure there's regular progress() calls
[ https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949512#comment-16949512 ] Steve Loughran commented on HADOOP-15961: - hey, now I've finally got the available patch in, let's wrap this one up too. Can you start with a GitHub PR off trunk? thanks > S3A committers: make sure there's regular progress() calls > -- > > Key: HADOOP-15961 > URL: https://issues.apache.org/jira/browse/HADOOP-15961 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Steve Loughran >Assignee: lqjacklee >Priority: Minor > Attachments: HADOOP-15961-001.patch, HADOOP-15961-002.patch, > HADOOP-15961-003.patch > > > MAPREDUCE-7164 highlights how inside job/task commit more context.progress() > callbacks are needed, just for HDFS. > the S3A committers should be reviewed similarly. > At a glance: > StagingCommitter.commitTaskInternal() is at risk if a task writes > enough data to the local FS that the upload takes longer than the timeout. > it should call progress() after every single file commit, or better: modify > {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks > after every part upload.
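The proposal quoted in the issue body can be sketched as below. `Progressable` here is a local stand-in for `org.apache.hadoop.util.Progressable`, and the method body is illustrative only; the real `StagingCommitter.uploadFileToPendingCommit` signature and upload loop differ.

```java
public class ProgressSketch {
    // Stand-in for org.apache.hadoop.util.Progressable.
    interface Progressable { void progress(); }

    // Upload a file as numParts parts, pinging progress after every part
    // so a long multipart upload cannot trip the task liveness timeout.
    static int uploadFileToPendingCommit(int numParts, Progressable progress) {
        int uploaded = 0;
        for (int part = 1; part <= numParts; part++) {
            // ... upload part 'part' to the store here ...
            uploaded++;
            progress.progress();   // keep the task alive between parts
        }
        return uploaded;
    }

    public static void main(String[] args) {
        final int[] pings = {0};
        int parts = uploadFileToPendingCommit(5, () -> pings[0]++);
        System.out.println(parts + " parts uploaded, " + pings[0] + " progress calls");
    }
}
```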
[jira] [Updated] (HADOOP-16635) S3A innerGetFileStatus scans for directories-only still does a HEAD
[ https://issues.apache.org/jira/browse/HADOOP-16635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16635: Status: Patch Available (was: Open) > S3A innerGetFileStatus scans for directories-only still does a HEAD > --- > > Key: HADOOP-16635 > URL: https://issues.apache.org/jira/browse/HADOOP-16635 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > > The patch in HADOOP-16490 is incomplete: we are still checking for the Head > of each object, even though we only wanted the directory checks. As a result, > createFile is still vulnerable to 404 caching on unguarded S3 repos. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
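The intended behaviour can be sketched as follows; the `Probe` names and `requestsFor` helper are illustrative, not the actual `S3AFileSystem.innerGetFileStatus` internals. The point is that a directory-only probe must issue list/marker requests but never a HEAD on the bare object key, since that HEAD can poison the S3 404 cache before `createFile()` writes.

```java
import java.util.ArrayList;
import java.util.EnumSet;
import java.util.List;

public class StatusProbeSketch {
    enum Probe { HEAD, DIR_MARKER, LIST }

    // Return the S3 requests a status probe with this probe set would issue.
    static List<String> requestsFor(EnumSet<Probe> probes) {
        List<String> requests = new ArrayList<>();
        if (probes.contains(Probe.HEAD)) requests.add("HEAD key");        // object check
        if (probes.contains(Probe.DIR_MARKER)) requests.add("HEAD key/"); // dir marker check
        if (probes.contains(Probe.LIST)) requests.add("LIST key/");       // children check
        return requests;
    }

    public static void main(String[] args) {
        // A directory-only probe must not touch the bare object key.
        System.out.println(requestsFor(EnumSet.of(Probe.DIR_MARKER, Probe.LIST)));
    }
}
```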
[jira] [Updated] (HADOOP-16632) Speculating & Partitioned S3A magic committers can leave pending files under __magic
[ https://issues.apache.org/jira/browse/HADOOP-16632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16632: Status: Patch Available (was: Open) > Speculating & Partitioned S3A magic committers can leave pending files under > __magic > > > Key: HADOOP-16632 > URL: https://issues.apache.org/jira/browse/HADOOP-16632 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.1.3, 3.2.1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > > Partitioned S3A magic committers can leaving pending files, maybe upload data > This surfaced in an assertion failure on a parallel test run. > I thought it was actually a test failure, but with HADOOP-16207 all the docs > are preserved in the local FS and I can understand what happened. > h3. Junit process > {code} > [INFO] > [ERROR] Failures: > [ERROR] > ITestS3ACommitterMRJob.test_200_execute:344->customPostExecutionValidation:356 > Expected a java.io.FileNotFoundException to be thrown, but got the result: : > "Found magic dir which should have been deleted at > S3AFileStatus{path=s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic; > isDirectory=true; modification_time=0; access_time=0; owner=stevel; > group=stevel; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; > isEncrypted=true; isErasureCoded=false} isEmptyDirectory=UNKNOWN eTag=null > versionId=null > [s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic/app-attempt-0001/tasks/attempt_1570197469968_0003_m_08_1/__base/part-m-8 > s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic/app-attempt-0001/tasks/attempt_1570197469968_0003_m_08_1/__base/part-m-8.pending > {code} > Full details to follow in the comment as they are, well, detailed. > > Key point: AM-side job and task cleanup can happen before the worker task > finishes its writes. 
This will result in files under __magic. It may result > in pending uploads too, but only if the write began after the AM job cleanup > did a list + abort of all pending uploads under the destination directory
[jira] [Updated] (HADOOP-16651) S3 getBucketLocation() can return "US" for us-east
[ https://issues.apache.org/jira/browse/HADOOP-16651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16651: Summary: S3 getBucketLocation() can return "US" for us-east (was: s3 getBucketLocation() can return "US" for us-east) > S3 getBucketLocation() can return "US" for us-east > -- > > Key: HADOOP-16651 > URL: https://issues.apache.org/jira/browse/HADOOP-16651 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.1, 3.1.3 >Reporter: Steve Loughran >Priority: Major > > see: https://forums.aws.amazon.com/thread.jspa?messageID=796829=0 > apparently getBucketLocation can return US for a region when it is really > us-east-1 > this confuses DDB region calculation, which needs the us-east value. > proposed: change it in S3AFS.getBucketLocation -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-16651) S3 getBucketLocation() can return "US" for us-east
[ https://issues.apache.org/jira/browse/HADOOP-16651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-16651: --- Assignee: Steve Loughran > S3 getBucketLocation() can return "US" for us-east > -- > > Key: HADOOP-16651 > URL: https://issues.apache.org/jira/browse/HADOOP-16651 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.1, 3.1.3 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > see: https://forums.aws.amazon.com/thread.jspa?messageID=796829=0 > apparently getBucketLocation can return US for a region when it is really > us-east-1 > this confuses DDB region calculation, which needs the us-east value. > proposed: change it in S3AFS.getBucketLocation -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16651) s3 getBucketLocation() can return "US" for us-east
[ https://issues.apache.org/jira/browse/HADOOP-16651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949511#comment-16949511 ] Steve Loughran commented on HADOOP-16651: - including this in the HADOOP-16478 patch, pulling up code in HADOOP-16599 > s3 getBucketLocation() can return "US" for us-east > -- > > Key: HADOOP-16651 > URL: https://issues.apache.org/jira/browse/HADOOP-16651 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.1, 3.1.3 >Reporter: Steve Loughran >Priority: Major > > see: https://forums.aws.amazon.com/thread.jspa?messageID=796829=0 > apparently getBucketLocation can return US for a region when it is really > us-east-1 > this confuses DDB region calculation, which needs the us-east value. > proposed: change it in S3AFS.getBucketLocation -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
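The proposed fix in `S3AFS.getBucketLocation` amounts to normalising the legacy region name; this sketch assumes a simple remap is sufficient for the DDB region calculation, which is my reading of the issue, not confirmed code.

```java
public class RegionFixup {
    // Normalise the legacy "US" region name (US Standard) that
    // getBucketLocation can return, to the modern "us-east-1".
    static String fixBucketRegion(String region) {
        return (region == null || region.isEmpty() || region.equals("US"))
                ? "us-east-1" : region;
    }

    public static void main(String[] args) {
        System.out.println(fixBucketRegion("US"));        // us-east-1
        System.out.println(fixBucketRegion("eu-west-2")); // unchanged
    }
}
```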
[GitHub] [hadoop] steveloughran commented on issue #1614: HADOOP-16615. Add password check for credential provider
steveloughran commented on issue #1614: HADOOP-16615. Add password check for credential provider URL: https://github.com/apache/hadoop/pull/1614#issuecomment-541085339 code is good, you just need to deal with those checkstyles, which are mostly indentation and a couple of minor line lengths
[GitHub] [hadoop] ehiggs closed pull request #1647: HDFS-13310. The DatanodeProtocol should be have DNA_BACKUP to backup blocks.
ehiggs closed pull request #1647: HDFS-13310. The DatanodeProtocol should be have DNA_BACKUP to backup blocks. URL: https://github.com/apache/hadoop/pull/1647
[GitHub] [hadoop] ehiggs opened a new pull request #1647: HDFS-13310. The DatanodeProtocol should be have DNA_BACKUP to backup blocks.
ehiggs opened a new pull request #1647: HDFS-13310. The DatanodeProtocol should be have DNA_BACKUP to backup blocks. URL: https://github.com/apache/hadoop/pull/1647 FYI @virajith.
[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x
[ https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949492#comment-16949492 ] Wei-Chiu Chuang commented on HADOOP-16152: -- This patch breaks the Tez version we use. I think we need TEZ-4083 too. {noformat} 2019-10-08 21:46:59,220 [INFO] [main] |service.AbstractService|: Service org.apache.tez.dag.app.DAGAppMaster failed in state STARTED org.apache.hadoop.service.ServiceStateException: java.lang.NoClassDefFoundError: org/eclipse/jetty/util/ClassVisibilityChecker at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105) at org.apache.tez.dag.app.DAGAppMaster.startServices(DAGAppMaster.java:1968) at org.apache.tez.dag.app.DAGAppMaster.serviceStart(DAGAppMaster.java:2035) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194) at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2682) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876) at org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2678) at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2484) Caused by: java.lang.NoClassDefFoundError: org/eclipse/jetty/util/ClassVisibilityChecker at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:763) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:468) at java.net.URLClassLoader.access$100(URLClassLoader.java:74) at java.net.URLClassLoader$1.run(URLClassLoader.java:369) at java.net.URLClassLoader$1.run(URLClassLoader.java:363) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:362) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at 
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:763) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:468) at java.net.URLClassLoader.access$100(URLClassLoader.java:74) at java.net.URLClassLoader$1.run(URLClassLoader.java:369) at java.net.URLClassLoader$1.run(URLClassLoader.java:363) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:362) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at org.apache.hadoop.yarn.webapp.WebApps.$for(WebApps.java:509) at org.apache.hadoop.yarn.webapp.WebApps.$for(WebApps.java:515) {noformat} > Upgrade Eclipse Jetty version to 9.4.x > -- > > Key: HADOOP-16152 > URL: https://issues.apache.org/jira/browse/HADOOP-16152 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.2.0 >Reporter: Yuming Wang >Assignee: Siyao Meng >Priority: Major > Attachments: HADOOP-16152.002.patch, HADOOP-16152.002.patch, > HADOOP-16152.003.patch, HADOOP-16152.004.patch, HADOOP-16152.005.patch, > HADOOP-16152.006.patch, HADOOP-16152.v1.patch > > > Some big data projects have been upgraded Jetty to 9.4.x, which causes some > compatibility issues. > Spark: > [https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146] > Calcite: > [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87] > Hive: HIVE-21211 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13907) Fix TestWebDelegationToken#testKerberosDelegationTokenAuthenticator on Windows
[ https://issues.apache.org/jira/browse/HADOOP-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949485#comment-16949485 ] Hadoop QA commented on HADOOP-13907: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 37s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 35s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 39s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 53s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}113m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.3 Server=19.03.3 Image:yetus/hadoop:104ccca9169 | | JIRA Issue | HADOOP-13907 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12957327/HADOOP-13907.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 650eef2d26b2 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / ec86f42 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_222 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16588/testReport/ | | Max. process+thread count | 1379 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16588/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Fix TestWebDelegationToken#testKerberosDelegationTokenAuthenticator on Windows >
[GitHub] [hadoop] hadoop-yetus commented on issue #1626: YARN-9880. In YARN ui2 attempts tab, The running Application Attempt's ElapsedTime is incorrect.
hadoop-yetus commented on issue #1626: YARN-9880. In YARN ui2 attempts tab, The running Application Attempt's ElapsedTime is incorrect. URL: https://github.com/apache/hadoop/pull/1626#issuecomment-541070260 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 109 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 16 | Maven dependency ordering for branch | | +1 | mvninstall | 1362 | trunk passed | | +1 | compile | 629 | trunk passed | | +1 | checkstyle | 86 | trunk passed | | +1 | mvnsite | 122 | trunk passed | | +1 | shadedclient | 1023 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 70 | trunk passed | | 0 | spotbugs | 15 | Used deprecated FindBugs config; considering switching to SpotBugs. | | 0 | findbugs | 15 | branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui no findbugs output file (findbugsXml.xml) | ||| _ Patch Compile Tests _ | | 0 | mvndep | 16 | Maven dependency ordering for patch | | +1 | mvninstall | 99 | the patch passed | | -1 | jshint | 237 | The patch generated 1761 new + 0 unchanged - 0 fixed = 1761 total (was 0) | | -1 | compile | 179 | hadoop-yarn in the patch failed. | | -1 | javac | 179 | hadoop-yarn in the patch failed. | | -0 | checkstyle | 88 | hadoop-yarn-project/hadoop-yarn: The patch generated 2 new + 21 unchanged - 0 fixed = 23 total (was 21) | | +1 | mvnsite | 107 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 894 | patch has no errors when building and testing our client artifacts. 
| | +1 | javadoc | 72 | the patch passed | | 0 | findbugs | 16 | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui has no data from findbugs | ||| _ Other Tests _ | | +1 | unit | 163 | hadoop-yarn-server-common in the patch passed. | | -1 | unit | 5320 | hadoop-yarn-server-resourcemanager in the patch failed. | | +1 | unit | 324 | hadoop-yarn-ui in the patch passed. | | +1 | asflicense | 51 | The patch does not generate ASF License warnings. | | | | 11372 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppAttempts | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1626/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1626 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle jshint | | uname | Linux 278607d675bf 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 4a700c2 | | Default Java | 1.8.0_222 | | jshint | https://builds.apache.org/job/hadoop-multibranch/job/PR-1626/2/artifact/out/diff-patch-jshint.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1626/2/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1626/2/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1626/2/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1626/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1626/2/testReport/ | | Max. process+thread count | 818 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1626/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 jshint=2.10.2 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] steveloughran opened a new pull request #1646: HADOOP-15430. hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard
steveloughran opened a new pull request #1646: HADOOP-15430. hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard URL: https://github.com/apache/hadoop/pull/1646 path qualification in s3a fs strips any trailing / ; with tests This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
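The fix summarised above — path qualification in s3a stripping any trailing `/` — can be sketched as a standalone helper. This is an illustrative sketch only, with hypothetical names, not the actual S3AFileSystem qualification code:

```java
// Illustrative sketch (hypothetical names): strip trailing slashes during
// path qualification, keeping a bare "/" (root) intact, so that
// "mkdir -p path-ending-with-slash/" and "mkdir -p path-ending-with-slash"
// resolve to the same key.
public final class TrailingSlashDemo {

    /** Remove any trailing "/" characters, but never reduce root to "". */
    static String stripTrailingSlash(String path) {
        String p = path;
        while (p.length() > 1 && p.endsWith("/")) {
            p = p.substring(0, p.length() - 1);
        }
        return p;
    }

    public static void main(String[] args) {
        System.out.println(stripTrailingSlash("s3a://bucket/dir/")); // s3a://bucket/dir
        System.out.println(stripTrailingSlash("/"));                 // /
    }
}
```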
[GitHub] [hadoop] steveloughran edited a comment on issue #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD.
steveloughran edited a comment on issue #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD. URL: https://github.com/apache/hadoop/pull/1601#issuecomment-541059565 1. updated the docs. The only place we don't do Head and dir marker is in create() 1. also added a test to verify that the empty set of probes skips all http requests Now. can you create a Path with a trailing / ? I was about to say no, but remembered https://issues.apache.org/jira/browse/HADOOP-15430 .. one of the constructors of Path does let you get away with it, which is something which breaks S3Guard already
[GitHub] [hadoop] steveloughran commented on issue #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD.
steveloughran commented on issue #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD. URL: https://github.com/apache/hadoop/pull/1601#issuecomment-541059565 updated the docs. The only place we don't do Head and dir marker is in create() Now. can you create a Path with a trailing / ? I was about to say no, but remembered https://issues.apache.org/jira/browse/HADOOP-15430 .. one of the constructors of Path does let you get away with it, which is something which breaks S3Guard already
[jira] [Commented] (HADOOP-16580) Disable retry of FailoverOnNetworkExceptionRetry in case of AccessControlException
[ https://issues.apache.org/jira/browse/HADOOP-16580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949408#comment-16949408 ] Szilard Nemeth commented on HADOOP-16580: - Hi [~adam.antal]! Thanks for the patch! Actually, I'm with [~shuzirra] on this one: Without your excellent explanation, I wouldn't understand why the method is called failsWithAccessControlExceptionEightTimes. As you mentioned: Could you please incorporate your explanation into javadoc, as much as possible? I don't only mean for the above method, but any other part of code you feel needs some explanation. Apart from this, I could give a +1 for this, when you have the javadocs in place. Thanks! > Disable retry of FailoverOnNetworkExceptionRetry in case of > AccessControlException > -- > > Key: HADOOP-16580 > URL: https://issues.apache.org/jira/browse/HADOOP-16580 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 3.3.0 >Reporter: Adam Antal >Assignee: Adam Antal >Priority: Major > Attachments: HADOOP-16580.001.patch, HADOOP-16580.002.patch > > > HADOOP-14982 handled the case where a SaslException is thrown. The issue > still persists, since the exception that is thrown is an > *AccessControlException* because user has no kerberos credentials. > My suggestion is that we should add this case as well to > {{FailoverOnNetworkExceptionRetry}}. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
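The behaviour proposed in this thread — stop retrying/failing over once an AccessControlException surfaces — can be sketched as a simple retry decision. This is a hedged illustration with made-up names, not Hadoop's actual RetryPolicy API:

```java
import java.security.AccessControlException;

// Hedged sketch (hypothetical names): a failover retry decision that fails
// fast on AccessControlException, since missing Kerberos credentials will
// not be cured by retrying against another node.
public final class RetryDecisionDemo {
    enum Action { FAIL, FAILOVER_AND_RETRY }

    static Action shouldRetry(Exception e) {
        if (e instanceof AccessControlException) {
            return Action.FAIL;            // no credentials: retrying is futile
        }
        return Action.FAILOVER_AND_RETRY;  // network-style failures: fail over
    }

    public static void main(String[] args) {
        System.out.println(shouldRetry(new AccessControlException("no TGT")));
        System.out.println(shouldRetry(new java.io.IOException("connection reset")));
    }
}
```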
[jira] [Created] (HADOOP-16651) s3 getBucketLocation() can return "US" for us-east
Steve Loughran created HADOOP-16651: --- Summary: s3 getBucketLocation() can return "US" for us-east Key: HADOOP-16651 URL: https://issues.apache.org/jira/browse/HADOOP-16651 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 3.1.3, 3.2.1 Reporter: Steve Loughran see: https://forums.aws.amazon.com/thread.jspa?messageID=796829=0 apparently getBucketLocation can return US for a region when it is really us-east-1 this confuses DDB region calculation, which needs the us-east value. proposed: change it in S3AFS.getBucketLocation
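The proposed change — normalising the legacy "US" location value to "us-east-1" before it reaches the DynamoDB region calculation — could look roughly like this sketch (class and method names are hypothetical, not the actual S3AFileSystem code):

```java
// Hedged sketch (hypothetical names): map the legacy "US" value that
// getBucketLocation() can return for the original "US Standard" region
// onto the canonical "us-east-1" name that DDB region calculation needs.
public final class RegionFixupDemo {

    static String fixBucketRegion(String region) {
        // AWS historically reports the us-east-1 region as "US" (or empty).
        if (region == null || region.isEmpty() || "US".equals(region)) {
            return "us-east-1";
        }
        return region;
    }

    public static void main(String[] args) {
        System.out.println(fixBucketRegion("US"));        // us-east-1
        System.out.println(fixBucketRegion("eu-west-1")); // eu-west-1
    }
}
```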
[GitHub] [hadoop] hadoop-yetus commented on issue #1614: HADOOP-16615. Add password check for credential provider
hadoop-yetus commented on issue #1614: HADOOP-16615. Add password check for credential provider URL: https://github.com/apache/hadoop/pull/1614#issuecomment-541045012 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 91 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1313 | trunk passed | | +1 | compile | 1184 | trunk passed | | +1 | checkstyle | 42 | trunk passed | | +1 | mvnsite | 79 | trunk passed | | +1 | shadedclient | 930 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 93 | trunk passed | | 0 | spotbugs | 141 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 138 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 63 | the patch passed | | +1 | compile | 1102 | the patch passed | | +1 | javac | 1102 | the patch passed | | -0 | checkstyle | 49 | hadoop-common-project/hadoop-common: The patch generated 12 new + 23 unchanged - 0 fixed = 35 total (was 23) | | +1 | mvnsite | 87 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 831 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 87 | the patch passed | | +1 | findbugs | 142 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 587 | hadoop-common in the patch passed. | | +1 | asflicense | 45 | The patch does not generate ASF License warnings. 
| | | | 6958 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1614/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1614 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 75c6b5958cb3 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 4a700c2 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1614/5/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1614/5/testReport/ | | Max. process+thread count | 1338 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1614/5/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] cjn082030 opened a new pull request #1645: YARN-9881. Change Cluster_Scheduler_API's Item memory‘s datatype from int to long.
cjn082030 opened a new pull request #1645: YARN-9881. Change Cluster_Scheduler_API's Item memory‘s datatype from int to long. URL: https://github.com/apache/hadoop/pull/1645 The Yarn Rest http://rm-http-address:port/ws/v1/cluster/scheduler document, In hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Scheduler_API, change Item memory‘s datatype from int to long. 1.change Capacity Scheduler API's item [memory]'s dataType from int to long. 2. change Fair Scheduler API's item [memory]'s dataType from int to long.
[jira] [Commented] (HADOOP-16520) Race condition in DDB table init and waiting threads
[ https://issues.apache.org/jira/browse/HADOOP-16520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949397#comment-16949397 ] Hudson commented on HADOOP-16520: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17524 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17524/]) HADOOP-16520. Race condition in DDB table init and waiting threads. (github: rev 4a700c20d553dc5336ee881719bcf189fc46bfbf) * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestPathMetadataDynamoDBTranslation.java * (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestDynamoDBMiscOperations.java * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/PathMetadataDynamoDBTranslation.java * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTableAccess.java * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardToolDynamoDB.java * (add) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStoreTableManager.java * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStore.java > Race condition in DDB table init and waiting threads > > > Key: HADOOP-16520 > URL: https://issues.apache.org/jira/browse/HADOOP-16520 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Gabor Bota >Priority: Major > > s3guard threads waiting for table creation completion can be scheduled before 
> the creating thread, look for the version marker and then fail. > window will be sleep times in AWS SDK Table.waitForActive();
[jira] [Commented] (HADOOP-13907) Fix TestWebDelegationToken#testKerberosDelegationTokenAuthenticator on Windows
[ https://issues.apache.org/jira/browse/HADOOP-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949391#comment-16949391 ] Ayush Saxena commented on HADOOP-13907: --- Thanx [~knanasi] for the patch. Faced similar issue on windows, The patch v001 fixes it for me. Will push this by EOD if no objections. > Fix TestWebDelegationToken#testKerberosDelegationTokenAuthenticator on Windows > -- > > Key: HADOOP-13907 > URL: https://issues.apache.org/jira/browse/HADOOP-13907 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.0 >Reporter: Xiaoyu Yao >Assignee: Kitti Nanasi >Priority: Major > Labels: kerberos > Attachments: HADOOP-13907.001.patch > > > Running unit test > TestWebDelegationToken#testKerberosDelegationTokenAuthenticator on windows > will fail with {{java.lang.IllegalArgumentException: Can't get Kerberos > realm}}
[jira] [Updated] (HADOOP-13907) Fix TestWebDelegationToken#testKerberosDelegationTokenAuthenticator on Windows
[ https://issues.apache.org/jira/browse/HADOOP-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HADOOP-13907: -- Summary: Fix TestWebDelegationToken#testKerberosDelegationTokenAuthenticator on Windows (was: Fix KerberosUtil#getDefaultRealm() on Windows) > Fix TestWebDelegationToken#testKerberosDelegationTokenAuthenticator on Windows > -- > > Key: HADOOP-13907 > URL: https://issues.apache.org/jira/browse/HADOOP-13907 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.0 >Reporter: Xiaoyu Yao >Assignee: Kitti Nanasi >Priority: Major > Labels: kerberos > Attachments: HADOOP-13907.001.patch > > > Running unit test > TestWebDelegationToken#testKerberosDelegationTokenAuthenticator on windows > will fail with {{java.lang.IllegalArgumentException: Can't get Kerberos > realm}}
[GitHub] [hadoop] hadoop-yetus commented on issue #1614: HADOOP-16615. Add password check for credential provider
hadoop-yetus commented on issue #1614: HADOOP-16615. Add password check for credential provider URL: https://github.com/apache/hadoop/pull/1614#issuecomment-541025900 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 83 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1226 | trunk passed | | +1 | compile | 1082 | trunk passed | | +1 | checkstyle | 45 | trunk passed | | +1 | mvnsite | 80 | trunk passed | | +1 | shadedclient | 938 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 87 | trunk passed | | 0 | spotbugs | 128 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 126 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 48 | the patch passed | | +1 | compile | 1026 | the patch passed | | +1 | javac | 1026 | the patch passed | | -0 | checkstyle | 46 | hadoop-common-project/hadoop-common: The patch generated 12 new + 22 unchanged - 0 fixed = 34 total (was 22) | | +1 | mvnsite | 78 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 806 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 85 | the patch passed | | +1 | findbugs | 130 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 596 | hadoop-common in the patch passed. | | +1 | asflicense | 48 | The patch does not generate ASF License warnings. 
| | | | 6614 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1614/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1614 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b101a7cec088 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / f267917 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1614/4/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1614/4/testReport/ | | Max. process+thread count | 1378 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1614/4/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] steveloughran commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation
steveloughran commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation URL: https://github.com/apache/hadoop/pull/1619#issuecomment-541023708 javadoc failure ``` [WARNING] /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1619/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java:445: warning - Tag @link: reference not found: ExitUtil.ExitException ```
[GitHub] [hadoop] steveloughran commented on a change in pull request #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD.
steveloughran commented on a change in pull request #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD. URL: https://github.com/apache/hadoop/pull/1601#discussion_r333942954 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ## @@ -2730,39 +2730,41 @@ S3AFileStatus innerGetFileStatus(final Path f, * @throws FileNotFoundException when the path does not exist * @throws IOException on other problems. */ + @VisibleForTesting @Retries.RetryTranslated - private S3AFileStatus s3GetFileStatus(final Path path, - String key, + S3AFileStatus s3GetFileStatus(final Path path, + final String key, final Set probes, final Set tombstones) throws IOException { -if (!key.isEmpty() && probes.contains(StatusProbeEnum.Head)) { - try { -ObjectMetadata meta = getObjectMetadata(key); - -if (objectRepresentsDirectory(key, meta.getContentLength())) { - LOG.debug("Found exact file: fake directory"); - return new S3AFileStatus(Tristate.TRUE, path, username); -} else { - LOG.debug("Found exact file: normal file"); +if (!key.isEmpty()) { + if (probes.contains(StatusProbeEnum.Head) && !key.endsWith("/")) { Review comment: yes. That's exactly my thought. note: none of this API is public, its for avoiding problems on ops where we don't want to look for a file but do for a dir marker. And AFAIK, you can't go from a Path to a / as we strip that off. How about 1. I do a review of all places where we don't ask for the HEAD? So far I think I'm only doing it in create 1. I clarify in javadocs
[GitHub] [hadoop] hadoop-yetus commented on issue #1614: HADOOP-16615. Add password check for credential provider
hadoop-yetus commented on issue #1614: HADOOP-16615. Add password check for credential provider URL: https://github.com/apache/hadoop/pull/1614#issuecomment-541020841 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 1794 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1067 | trunk passed | | +1 | compile | 1016 | trunk passed | | +1 | checkstyle | 51 | trunk passed | | +1 | mvnsite | 86 | trunk passed | | +1 | shadedclient | 881 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 92 | trunk passed | | 0 | spotbugs | 123 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 120 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 49 | the patch passed | | +1 | compile | 968 | the patch passed | | +1 | javac | 968 | the patch passed | | -0 | checkstyle | 51 | hadoop-common-project/hadoop-common: The patch generated 12 new + 22 unchanged - 0 fixed = 34 total (was 22) | | +1 | mvnsite | 80 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 718 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 94 | the patch passed | | +1 | findbugs | 132 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 545 | hadoop-common in the patch passed. | | +1 | asflicense | 55 | The patch does not generate ASF License warnings. 
| | | | 7897 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1614/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1614 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 9f23c3ae07b4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / f267917 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1614/3/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1614/3/testReport/ | | Max. process+thread count | 1374 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1614/3/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] steveloughran commented on issue #1591: HADOOP-16629: support copyFile in s3afilesystem
steveloughran commented on issue #1591: HADOOP-16629: support copyFile in s3afilesystem URL: https://github.com/apache/hadoop/pull/1591#issuecomment-541018861 FYI @bgaborg @ehiggs
[GitHub] [hadoop] steveloughran commented on issue #1591: HADOOP-16629: support copyFile in s3afilesystem
steveloughran commented on issue #1591: HADOOP-16629: support copyFile in s3afilesystem URL: https://github.com/apache/hadoop/pull/1591#issuecomment-541018276 Thinking a bit about what a followup patch for cross-store copy would be; I think it'd be how I think the Multipart Upload API needs to go. There'd be an abstract copier class you'd get an instance of from the dest fs to make 1+ copies under a dest path from a given source ``` CopierBuilder InitiateCopy(Path destination, FileSystem sourceFS, Path source) ``` which you then set ops on to build up the copy ``` CopyOperationBuilder builder = copier.copy() setSource(sourceStatus) // or a path setDest(destPath) must("fs.option.overwrite", true) ``` where you could set up things like overwrite, FS permissions, .. And then kick off the copy ``` CompletableFuture outcome = builder.build() ``` and await that future. If you are doing many copies, you'd put them in a set of futures and await them all to complete, in whatever order the store chooses. So you don't have to guess what is the optimal order (though a bit of randomisation is always handy) Like I said: a followup. What's interesting with that is you could implement a default one which does exec client side in a thread pool. Slower than a rename, but viable
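The builder-plus-future shape described in the comment above can be sketched with a plain thread pool — the client-side default mentioned at the end. Every class and method name here is hypothetical, not a real Hadoop API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch (hypothetical names) of a copy builder whose build() kicks
// off the work asynchronously and hands back a CompletableFuture; the
// default "copier" here just runs a client-side callback on a thread pool.
public final class CopyBuilderDemo {

    /** Pluggable copy action; a real store would do the server-side copy. */
    interface CopyAction { String run(String source, String dest); }

    static final class CopyOperationBuilder {
        private String source;
        private String dest;
        private final ExecutorService pool;
        private final CopyAction action;

        CopyOperationBuilder(ExecutorService pool, CopyAction action) {
            this.pool = pool;
            this.action = action;
        }
        CopyOperationBuilder setSource(String s) { this.source = s; return this; }
        CopyOperationBuilder setDest(String d)   { this.dest = d; return this; }

        /** Kick off the copy; callers await the future in whatever order they like. */
        CompletableFuture<String> build() {
            return CompletableFuture.supplyAsync(() -> action.run(source, dest), pool);
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CopyAction copy = (src, dst) -> src + " -> " + dst;
        CompletableFuture<String> outcome = new CopyOperationBuilder(pool, copy)
            .setSource("s3a://src/file")
            .setDest("s3a://dest/file")
            .build();
        System.out.println(outcome.join()); // s3a://src/file -> s3a://dest/file
        pool.shutdown();
    }
}
```

Awaiting many such futures (for instance via CompletableFuture.allOf) leaves the completion order to the store, as the comment suggests.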
[GitHub] [hadoop] steveloughran commented on issue #988: HADOOP-16376. ABFS: Override access() to no-op.
steveloughran commented on issue #988: HADOOP-16376. ABFS: Override access() to no-op. URL: https://github.com/apache/hadoop/pull/988#issuecomment-541012265 missed this +1
[GitHub] [hadoop] hadoop-yetus commented on issue #763: [WIP] HADOOP-15984. Update jersey from 1.19 to 2.x
hadoop-yetus commented on issue #763: [WIP] HADOOP-15984. Update jersey from 1.19 to 2.x URL: https://github.com/apache/hadoop/pull/763#issuecomment-541012123 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 44 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 27 | Maven dependency ordering for branch | | +1 | mvninstall | 1254 | trunk passed | | +1 | compile | 1292 | trunk passed | | +1 | checkstyle | 196 | trunk passed | | +1 | mvnsite | 426 | trunk passed | | +1 | shadedclient | 1580 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 423 | trunk passed | | 0 | spotbugs | 132 | Used deprecated FindBugs config; considering switching to SpotBugs. | | 0 | findbugs | 29 | branch/hadoop-project no findbugs output file (findbugsXml.xml) | ||| _ Patch Compile Tests _ | | 0 | mvndep | 39 | Maven dependency ordering for patch | | -1 | mvninstall | 28 | hadoop-yarn-common in the patch failed. | | -1 | compile | 378 | root in the patch failed. | | -1 | javac | 378 | root in the patch failed. | | -0 | checkstyle | 177 | root: The patch generated 3 new + 605 unchanged - 18 fixed = 608 total (was 623) | | -1 | mvnsite | 30 | hadoop-yarn-common in the patch failed. | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 15 | The patch has no ill-formed XML file. | | -1 | shadedclient | 235 | patch has errors when building and testing our client artifacts. 
| | -1 | javadoc | 49 | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common generated 2 new + 4190 unchanged - 0 fixed = 4192 total (was 4190) | | 0 | findbugs | 17 | hadoop-project has no data from findbugs | | -1 | findbugs | 48 | hadoop-common-project/hadoop-kms generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | -1 | findbugs | 28 | hadoop-yarn-common in the patch failed. | ||| _ Other Tests _ | | +1 | unit | 16 | hadoop-project in the patch passed. | | +1 | unit | 585 | hadoop-common in the patch passed. | | +1 | unit | 215 | hadoop-kms in the patch passed. | | -1 | unit | 1110 | hadoop-hdfs in the patch failed. | | -1 | unit | 303 | hadoop-hdfs-httpfs in the patch failed. | | -1 | unit | 1354 | hadoop-hdfs-rbf in the patch failed. | | -1 | unit | 31 | hadoop-yarn-common in the patch failed. | | +1 | asflicense | 34 | The patch does not generate ASF License warnings. | | | | 11609 | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-common-project/hadoop-kms | | | Dead store to idx in org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map) At KMS.java:org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map) At KMS.java:[line 180] | | Failed junit tests | hadoop.hdfs.TestFileAppend4 | | | hadoop.hdfs.TestErasureCodingExerciseAPIs | | | hadoop.hdfs.TestDFSStripedOutputStream | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.tools.TestDFSAdminWithHA | | | hadoop.fs.http.server.TestHttpFSServer | | | hadoop.fs.contract.router.TestRouterHDFSContractRename | | | hadoop.fs.contract.router.TestRouterHDFSContractAppendSecure | | | hadoop.fs.contract.router.TestRouterHDFSContractConcatSecure | | | hadoop.fs.contract.router.TestRouterHDFSContractSeek | | | hadoop.fs.contract.router.TestRouterHDFSContractAppend | | | hadoop.fs.contract.router.TestRouterHDFSContractGetFileStatusSecure | | | hadoop.fs.contract.router.TestRouterHDFSContractGetFileStatus | | | hadoop.fs.contract.router.web.TestRouterWebHDFSContractSeek | | 
Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-763/11/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/763 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux d0ba9cac888e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / f267917 | | Default Java | 1.8.0_222 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-763/11/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-763/11/artifact/out/patch-compile-root.txt | | javac |
[GitHub] [hadoop] hadoop-yetus commented on issue #1431: HDDS-1569 Support creating multiple pipelines with same datanode
hadoop-yetus commented on issue #1431: HDDS-1569 Support creating multiple pipelines with same datanode URL: https://github.com/apache/hadoop/pull/1431#issuecomment-541011439 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 220 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 29 new or modified test files. | ||| _ HDDS-1564 Compile Tests _ | | 0 | mvndep | 27 | Maven dependency ordering for branch | | -1 | mvninstall | 44 | hadoop-hdds in HDDS-1564 failed. | | -1 | mvninstall | 48 | hadoop-ozone in HDDS-1564 failed. | | -1 | compile | 21 | hadoop-hdds in HDDS-1564 failed. | | -1 | compile | 15 | hadoop-ozone in HDDS-1564 failed. | | +1 | checkstyle | 79 | HDDS-1564 passed | | +1 | mvnsite | 0 | HDDS-1564 passed | | +1 | shadedclient | | branch has no errors when building and testing our client artifacts. | | -1 | javadoc | 21 | hadoop-hdds in HDDS-1564 failed. | | -1 | javadoc | 19 | hadoop-ozone in HDDS-1564 failed. | | 0 | spotbugs | 1211 | Used deprecated FindBugs config; considering switching to SpotBugs. | | -1 | findbugs | 35 | hadoop-hdds in HDDS-1564 failed. | | -1 | findbugs | 21 | hadoop-ozone in HDDS-1564 failed. | ||| _ Patch Compile Tests _ | | 0 | mvndep | 32 | Maven dependency ordering for patch | | -1 | mvninstall | 39 | hadoop-hdds in the patch failed. | | -1 | mvninstall | 45 | hadoop-ozone in the patch failed. | | -1 | compile | 27 | hadoop-hdds in the patch failed. | | -1 | compile | 19 | hadoop-ozone in the patch failed. | | -1 | javac | 27 | hadoop-hdds in the patch failed. | | -1 | javac | 19 | hadoop-ozone in the patch failed. 
| | +1 | checkstyle | 33 | hadoop-hdds: The patch generated 0 new + 0 unchanged - 3 fixed = 0 total (was 3) | | +1 | checkstyle | 35 | The patch passed checkstyle in hadoop-ozone | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 1 | The patch has no ill-formed XML file. | | +1 | shadedclient | 931 | patch has no errors when building and testing our client artifacts. | | -1 | javadoc | 25 | hadoop-hdds in the patch failed. | | -1 | javadoc | 22 | hadoop-ozone in the patch failed. | | -1 | findbugs | 32 | hadoop-hdds in the patch failed. | | -1 | findbugs | 17 | hadoop-ozone in the patch failed. | ||| _ Other Tests _ | | -1 | unit | 24 | hadoop-hdds in the patch failed. | | -1 | unit | 22 | hadoop-ozone in the patch failed. | | +1 | asflicense | 30 | The patch does not generate ASF License warnings. | | | | 3117 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/23/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1431 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 4e301a09027e 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | HDDS-1564 / 7b5a5fe | | Default Java | 1.8.0_222 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/23/artifact/out/branch-mvninstall-hadoop-hdds.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/23/artifact/out/branch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/23/artifact/out/branch-compile-hadoop-hdds.txt | | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/23/artifact/out/branch-compile-hadoop-ozone.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/23/artifact/out/branch-javadoc-hadoop-hdds.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/23/artifact/out/branch-javadoc-hadoop-ozone.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/23/artifact/out/branch-findbugs-hadoop-hdds.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/23/artifact/out/branch-findbugs-hadoop-ozone.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/23/artifact/out/patch-mvninstall-hadoop-hdds.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/23/artifact/out/patch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/23/artifact/out/patch-compile-hadoop-hdds.txt |
[jira] [Commented] (HADOOP-16492) Support HuaweiCloud Object Storage - as a file system in Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949344#comment-16949344 ] Hadoop QA commented on HADOOP-16492: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 45s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 39 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 45s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-cloud-storage-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 18s{color} | {color:red} hadoop-huaweicloud in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 31s{color} | {color:red} hadoop-cloud-storage-project in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 22s{color} | {color:red} hadoop-cloud-storage-project in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 22s{color} | {color:red} hadoop-cloud-storage-project in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 52s{color} | {color:orange} hadoop-cloud-storage-project: The patch generated 1901 new + 0 unchanged - 0 fixed = 1901 total (was 0) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 21s{color} | {color:red} hadoop-huaweicloud in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 23s{color} | {color:red} hadoop-cloud-storage-project in the patch failed. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 8s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 38s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-cloud-storage-project {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 20s{color} | {color:red} hadoop-huaweicloud in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s{color} | {color:red} hadoop-huaweicloud in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 23s{color} | {color:red} hadoop-cloud-storage-project in the patch failed. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 22s{color} | {color:red} hadoop-huaweicloud in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 23s{color} | {color:red} hadoop-cloud-storage-project in the patch failed. {color} | | {color:green}+1{color} |
[jira] [Commented] (HADOOP-16349) DynamoDBMetadataStore.getVersionMarkerItem() to log at info/warn on retry
[ https://issues.apache.org/jira/browse/HADOOP-16349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949341#comment-16949341 ] Gabor Bota commented on HADOOP-16349: - https://github.com/apache/hadoop/pull/1576 Fixed in HADOOP-16540. +1 on #1576 from [~ste...@apache.org]. Committing. Thanks. > DynamoDBMetadataStore.getVersionMarkerItem() to log at info/warn on retry > - > > Key: HADOOP-16349 > URL: https://issues.apache.org/jira/browse/HADOOP-16349 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Gabor Bota >Priority: Major > > If you delete the version marker from a S3Guard table, it appears to hang for > 5 minutes. > Only if you restart and turn logging to debug do you see that > {{DynamoDBMetadataStore.getVersionMarkerItem()}} is sleeping and retrying. > # log at warn > # add entry to troubleshooting doc on the topic > The cause of the failure can be any of > * table being inited elsewhere: expectation, fast recovery > * it's not a S3Guard table: it won't recover > * it's a S3Guard table without a version marker: it won't recover. > + consider having a shorter retry lifespan, though if it adds a new config > point I'm a bit reluctant. For s3guard bucket-info it would make sense to > change the policy to be aggressively short lived -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
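The "log at info/warn" fix asked for above amounts to making each retry visible instead of sleeping silently for minutes at debug level. A minimal sketch of that behaviour, using a generic bounded-retry helper with illustrative names rather than the actual hadoop-aws Invoker machinery:

```java
import java.io.IOException;

public class RetryWithWarn {
    interface Operation<T> { T run() throws IOException; }

    /** Retry with bounded attempts, emitting a visible warn line on each
     *  failure; the visible line is the point of the change. */
    static <T> T retry(String opName, int maxAttempts, long sleepMs, Operation<T> op)
            throws IOException, InterruptedException {
        IOException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.run();
            } catch (IOException e) {
                last = e;
                System.out.printf("WARN %s failed (attempt %d/%d): %s; retrying in %d ms%n",
                        opName, attempt, maxAttempts, e.getMessage(), sleepMs);
                Thread.sleep(sleepMs);
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails twice (e.g. version marker not yet written), then succeeds.
        String marker = retry("getVersionMarkerItem", 5, 10, () -> {
            if (++calls[0] < 3) {
                throw new IOException("version marker not found");
            }
            return "version=100";
        });
        System.out.println(marker + " after " + calls[0] + " attempts");
    }
}
```

With this shape, a table being initialised elsewhere recovers quickly and noisily, while the unrecoverable cases (not a S3Guard table, or no version marker at all) surface in the log long before the retry budget is exhausted.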
[jira] [Resolved] (HADOOP-16349) DynamoDBMetadataStore.getVersionMarkerItem() to log at info/warn on retry
[ https://issues.apache.org/jira/browse/HADOOP-16349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota resolved HADOOP-16349. - Resolution: Fixed
[jira] [Resolved] (HADOOP-16520) Race condition in DDB table init and waiting threads
[ https://issues.apache.org/jira/browse/HADOOP-16520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota resolved HADOOP-16520. - Resolution: Fixed > Race condition in DDB table init and waiting threads > > > Key: HADOOP-16520 > URL: https://issues.apache.org/jira/browse/HADOOP-16520 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Gabor Bota >Priority: Major > > s3guard threads waiting for table creation completion can be scheduled before > the creating thread, look for the version marker and then fail. > window will be sleep times in AWS SDK Table.waitForActive();
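The race described in the issue (a waiting thread is scheduled before the creating thread, reads the version marker, and fails) is closed by polling with a deadline instead of failing on the first missing read. A minimal sketch with illustrative names, not the actual DynamoDBMetadataStore code:

```java
import java.util.concurrent.atomic.AtomicReference;

public class VersionMarkerRace {
    // Shared "table": empty until the creating thread writes the marker.
    static final AtomicReference<String> versionMarker = new AtomicReference<>();

    /** Reader side: rather than a single read that fails when the reader is
     *  scheduled before the creator, poll until the marker appears or a
     *  deadline passes. */
    public static String awaitVersionMarker(long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            String marker = versionMarker.get();
            if (marker != null) {
                return marker;
            }
            Thread.sleep(pollMs);
        }
        throw new IllegalStateException("version marker not found after " + timeoutMs + " ms");
    }

    public static void main(String[] args) throws Exception {
        // Creator is deliberately slow: the reader starts first, reproducing the race.
        Thread creator = new Thread(() -> {
            try {
                Thread.sleep(50);
            } catch (InterruptedException ignored) {
            }
            versionMarker.set("version=100");
        });
        Thread reader = new Thread(() -> {
            try {
                System.out.println("reader saw " + awaitVersionMarker(2000, 10));
            } catch (InterruptedException ignored) {
            }
        });
        reader.start();  // a one-shot read here would find no marker and fail
        creator.start();
        reader.join();
        creator.join();
    }
}
```

The deadline plays the role the issue assigns to the AWS SDK's Table.waitForActive() sleep window: the reader tolerates the creator being scheduled late, but still fails loudly once the window closes.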
[jira] [Commented] (HADOOP-16520) Race condition in DDB table init and waiting threads
[ https://issues.apache.org/jira/browse/HADOOP-16520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949339#comment-16949339 ] Gabor Bota commented on HADOOP-16520: - +1 on #1576 from [~ste...@apache.org]. Committing. Thanks.
[GitHub] [hadoop] bgaborg merged pull request #1576: HADOOP-16520 dynamodb ms version race refactor.
bgaborg merged pull request #1576: HADOOP-16520 dynamodb ms version race refactor. URL: https://github.com/apache/hadoop/pull/1576
[jira] [Updated] (HADOOP-16520) Race condition in DDB table init and waiting threads
[ https://issues.apache.org/jira/browse/HADOOP-16520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-16520: Summary: Race condition in DDB table init and waiting threads (was: race condition in DDB table init and waiting threads)
[GitHub] [hadoop] bgaborg commented on issue #1576: HADOOP-16520 dynamodb ms version race refactor.
bgaborg commented on issue #1576: HADOOP-16520 dynamodb ms version race refactor. URL: https://github.com/apache/hadoop/pull/1576#issuecomment-541002758 Thanks. I removed the leftover line from the docs and merged this change.
[GitHub] [hadoop] steveloughran commented on a change in pull request #1576: HADOOP-16520 dynamodb ms version race refactor.
steveloughran commented on a change in pull request #1576: HADOOP-16520 dynamodb ms version race refactor. URL: https://github.com/apache/hadoop/pull/1576#discussion_r333903245 ## File path: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md ## @@ -974,6 +1001,8 @@ in an incompatible manner. The version marker in tables exists to support such an option if it ever becomes necessary, by ensuring that all S3Guard client can recognise any version mismatch. +* Table versionin Review comment: looks like a leftover line
[GitHub] [hadoop] hddong commented on a change in pull request #1614: HADOOP-16615. Add password check for credential provider
hddong commented on a change in pull request #1614: HADOOP-16615. Add password check for credential provider URL: https://github.com/apache/hadoop/pull/1614#discussion_r333893786 ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredShell.java ## @@ -174,11 +174,20 @@ public void testPromptForCredential() throws Exception { assertEquals(0, rc); assertTrue(outContent.toString().contains("credential1 has been successfully " + "created.")); - -String[] args2 = {"delete", "credential1", "-f", "-provider", -jceksProvider}; + +String[] args2 = {"check", "credential1", "-provider", + jceksProvider}; +ArrayList password = new ArrayList(); +password.add("p@ssw0rd"); +shell.setPasswordReader(new MockPasswordReader(password)); rc = shell.run(args2); assertEquals(0, rc); +assertTrue(outContent.toString().contains("Password match success for credential1.")); Review comment: An error check added.
[GitHub] [hadoop] bgaborg commented on issue #1576: HADOOP-16520 dynamodb ms version race refactor.
bgaborg commented on issue #1576: HADOOP-16520 dynamodb ms version race refactor. URL: https://github.com/apache/hadoop/pull/1576#issuecomment-540978947 last commit: - renamed handler to manager - S3GUARD_DDB_THROTTLE_RETRY_INTERVAL_DEFAULT is left at 100ms but S3GUARD_DDB_MAX_RETRIES is set to 10 now instead of 20. - docs added
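For a rough sense of what the retry-count change in that commit buys: with a fixed 100 ms interval, halving the retries from 20 to 10 halves the worst-case wait. The doubling variant below is a hypothetical policy shown only for comparison, not claimed to be what S3Guard does:

```java
public class RetryBudget {
    /** Worst-case wait with a fixed interval between attempts. */
    static long fixedTotalMs(int retries, long intervalMs) {
        return retries * intervalMs;
    }

    /** Worst-case wait if each retry doubled the previous interval
     *  (hypothetical policy, for comparison only). */
    static long exponentialTotalMs(int retries, long baseMs) {
        long total = 0;
        for (int i = 0; i < retries; i++) {
            total += baseMs << i;  // baseMs * 2^i
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println("fixed, 20 retries @100ms: " + fixedTotalMs(20, 100) + " ms");       // 2000 ms
        System.out.println("fixed, 10 retries @100ms: " + fixedTotalMs(10, 100) + " ms");       // 1000 ms
        System.out.println("doubling, 10 retries:     " + exponentialTotalMs(10, 100) + " ms"); // 102300 ms
    }
}
```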
[GitHub] [hadoop] bgaborg commented on issue #1576: HADOOP-16520 dynamodb ms version race refactor.
bgaborg commented on issue #1576: HADOOP-16520 dynamodb ms version race refactor. URL: https://github.com/apache/hadoop/pull/1576#issuecomment-540978659 @steveloughran can you take a look at this? thanks
[GitHub] [hadoop] hddong commented on a change in pull request #1614: HADOOP-16615. Add password check for credential provider
hddong commented on a change in pull request #1614: HADOOP-16615. Add password check for credential provider URL: https://github.com/apache/hadoop/pull/1614#discussion_r333891017 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialShell.java ## @@ -66,6 +67,7 @@ * * % hadoop credential create alias [-provider providerPath] * % hadoop credential list [-provider providerPath] + * % hadoop credential check alias [-provider providerPath] Review comment: > should the full usage option set be listed here? Just to keep the format uniform and `[-value]` is for test, may not need here.
[jira] [Commented] (HADOOP-16638) Use Relative URLs in Hadoop KMS WebApps
[ https://issues.apache.org/jira/browse/HADOOP-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949272#comment-16949272 ] Ayush Saxena commented on HADOOP-16638: --- Thanx [~belugabehr] for the patch. Seems fair enough. Just to be double sure: did you try this one both ways too? > Use Relative URLs in Hadoop KMS WebApps > --- > > Key: HADOOP-16638 > URL: https://issues.apache.org/jira/browse/HADOOP-16638 > Project: Hadoop Common > Issue Type: Sub-task > Components: kms >Affects Versions: 3.2.0 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Attachments: HADOOP-16638.1.patch, HADOOP-16638.2.patch, > HADOOP-16638.3.patch >
[jira] [Updated] (HADOOP-16492) Support HuaweiCloud Object Storage - as a file system in Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhongjun updated HADOOP-16492: -- Attachment: HADOOP-16492.002.patch > Support HuaweiCloud Object Storage - as a file system in Hadoop > --- > > Key: HADOOP-16492 > URL: https://issues.apache.org/jira/browse/HADOOP-16492 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 3.3.0 >Reporter: zhongjun >Priority: Major > Attachments: HADOOP-16492.001.patch, HADOOP-16492.002.patch, > huaweicloud-obs-integrate.pdf > > > Added support for HuaweiCloud > OBS([https://www.huaweicloud.com/en-us/product/obs.html]) to Hadoop, just > like what we do before for S3, ADL, OSS, etc. With simple configuration, > Hadoop applications can read/write data from OBS without any code change. >
[GitHub] [hadoop] hadoop-yetus commented on issue #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…
hadoop-yetus commented on issue #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests… URL: https://github.com/apache/hadoop/pull/1528#issuecomment-540945283

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 81 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 21 | Maven dependency ordering for branch |
| -1 | mvninstall | 39 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 42 | hadoop-ozone in trunk failed. |
| -1 | compile | 18 | hadoop-hdds in trunk failed. |
| -1 | compile | 14 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 61 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 931 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 20 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 1019 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 31 | hadoop-hdds in trunk failed. |
| -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
| -0 | patch | 1048 | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 24 | Maven dependency ordering for patch |
| -1 | mvninstall | 32 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
| -1 | compile | 21 | hadoop-hdds in the patch failed. |
| -1 | compile | 15 | hadoop-ozone in the patch failed. |
| -1 | javac | 21 | hadoop-hdds in the patch failed. |
| -1 | javac | 15 | hadoop-ozone in the patch failed. |
| +1 | checkstyle | 52 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 796 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 28 | hadoop-hdds in the patch failed. |
| -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 25 | hadoop-hdds in the patch failed. |
| -1 | unit | 23 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
| | | 2537 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/15/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1528 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux f625ba9c0320 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / f267917 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/15/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/15/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/15/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/15/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/15/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/15/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/15/artifact/out/branch-findbugs-hadoop-hdds.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/15/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/15/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/15/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/15/artifact/out/patch-compile-hadoop-hdds.txt |
| compile |
[jira] [Resolved] (HADOOP-16648) HDFS Native Client does not build correctly
[ https://issues.apache.org/jira/browse/HADOOP-16648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rajesh Balamohan resolved HADOOP-16648.
---------------------------------------
    Resolution: Duplicate

Marking this as a duplicate of HDFS-14900.

> HDFS Native Client does not build correctly
> -------------------------------------------
>
>                 Key: HADOOP-16648
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16648
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: native
>    Affects Versions: 3.3.0
>            Reporter: Rajesh Balamohan
>            Priority: Blocker
>
> Builds are failing in PRs with the following exception in the native client:
> {noformat}
> [WARNING] make[2]: Leaving directory '/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
> [WARNING] /opt/cmake/bin/cmake -E cmake_progress_report /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target/CMakeFiles 2 3 4 5 6 7 8 9 10 11
> [WARNING] [ 28%] Built target common_obj
> [WARNING] make[2]: Leaving directory '/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
> [WARNING] /opt/cmake/bin/cmake -E cmake_progress_report /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target/CMakeFiles 31
> [WARNING] [ 28%] Built target gmock_main_obj
> [WARNING] make[1]: Leaving directory '/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
> [WARNING] Makefile:127: recipe for target 'all' failed
> [WARNING] make[2]: *** No rule to make target '/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto/PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND', needed by 'main/native/libhdfspp/lib/proto/ClientNamenodeProtocol.hrpc.inl'. Stop.
> [WARNING] make[1]: *** [main/native/libhdfspp/lib/proto/CMakeFiles/proto_obj.dir/all] Error 2
> [WARNING] make[1]: *** Waiting for unfinished jobs
> [WARNING] make: *** [all] Error 2
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Hadoop Main ................. SUCCESS [  0.301 s]
> [INFO] Apache Hadoop Build Tools .......... SUCCESS [  1.348 s]
> [INFO] Apache Hadoop Project POM .......... SUCCESS [  0.501 s]
> [INFO] Apache Hadoop Annotations .......... SUCCESS [  1.391 s]
> [INFO] Apache Hadoop Project Dist POM ..... SUCCESS [  0.115 s]
> [INFO] Apache Hadoop Assemblies ........... SUCCESS [  0.168 s]
> [INFO] Apache Hadoop Maven Plugins ........ SUCCESS [  4.490 s]
> [INFO] Apache Hadoop MiniKDC .............. SUCCESS [  2.773 s]
> [INFO] Apache Hadoop Auth ................. SUCCESS [  7.922 s]
> [INFO] Apache Hadoop Auth Examples ........ SUCCESS [  1.381 s]
> [INFO] Apache Hadoop Common ............... SUCCESS [ 34.562 s]
> [INFO] Apache Hadoop NFS .................. SUCCESS [  5.583 s]
> [INFO] Apache Hadoop KMS .................. SUCCESS [  5.931 s]
> [INFO] Apache Hadoop Registry ............. SUCCESS [  5.816 s]
> [INFO] Apache Hadoop Common Project ....... SUCCESS [  0.056 s]
> [INFO] Apache Hadoop HDFS Client .......... SUCCESS [ 27.104 s]
> [INFO] Apache Hadoop HDFS ................. SUCCESS [ 42.065 s]
> [INFO] Apache Hadoop HDFS Native Client ... FAILURE [ 19.349 s]
> {noformat}
> Creating this ticket, as a couple of pull requests hit the same issue, e.g.:
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/2/artifact/out/patch-compile-root.txt
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1614/1/artifact/out/patch-compile-root.txt

--
This message was sent by Atlassian Jira (v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
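The `PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND` path in the failing make rule is CMake's sentinel for a missing tool: when `find_program()` cannot locate a binary, it sets the cache variable to the literal `<VAR>-NOTFOUND` string rather than leaving it empty, and that string then leaks into generated Makefile rules. A minimal sketch of the same pattern in shell (the variable name and messages are illustrative, not part of the Hadoop build):

```shell
#!/bin/sh
# Emulate CMake's find_program(): record either the resolved path to protoc
# or the literal "-NOTFOUND" sentinel when the binary is absent from PATH.
PROTOC_EXE="$(command -v protoc || echo PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND)"

# A pre-flight check like this would fail fast with a readable message,
# instead of letting make die later on "No rule to make target ...-NOTFOUND".
case "$PROTOC_EXE" in
  *-NOTFOUND) echo "protoc missing: build would record '$PROTOC_EXE'" ;;
  *)          echo "protoc found at $PROTOC_EXE" ;;
esac
```

This is why the fix referenced in HDFS-14900 revolves around making protoc resolvable in the build environment rather than changing the Makefile rule itself.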
[jira] [Commented] (HADOOP-16648) HDFS Native Client does not build correctly
[ https://issues.apache.org/jira/browse/HADOOP-16648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949175#comment-16949175 ]

Rajesh Balamohan commented on HADOOP-16648:
-------------------------------------------

Closing this ticket as HDFS-14900 fixes the issue. Thanks [~ayushtkn], [~ste...@apache.org]

> HDFS Native Client does not build correctly
> Key: HADOOP-16648
> URL: https://issues.apache.org/jira/browse/HADOOP-16648