[jira] [Updated] (HADOOP-17301) ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back
[ https://issues.apache.org/jira/browse/HADOOP-17301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sneha Vijayarajan updated HADOOP-17301:
---------------------------------------
    Status: Patch Available  (was: Open)

> ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back
> ----------------------------------------------------------------------------
>
>                 Key: HADOOP-17301
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17301
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.0
>            Reporter: Sneha Vijayarajan
>            Assignee: Sneha Vijayarajan
>            Priority: Critical
>              Labels: pull-request-available
>          Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> When reads issued by the read-ahead buffers failed, the exceptions were dropped and the failure was not reported to the calling app.
> HADOOP-16852 ("Report read-ahead error back") tried to handle this scenario by reporting the error back to the calling app, but that commit introduced a bug which can lead to a ReadBuffer being inserted into the read-completed queue twice.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
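The double-enqueue described in this issue (a ReadBuffer moved to the read-completed queue twice after a failed read-ahead) can be illustrated with a minimal, hypothetical sketch — the names below are illustrative, not the actual ReadBufferManager code. The guard is to make the completion transition idempotent by checking the buffer's status first:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical model of a read-ahead buffer lifecycle. A buffer in
// READING state is moved to the completed queue exactly once, whether
// the read succeeded or failed; a second doneReading() call is a no-op.
class CompletedQueueSketch {
    enum Status { READING, AVAILABLE, READ_FAILED }

    static final class ReadBuffer {
        Status status = Status.READING;
    }

    private final Queue<ReadBuffer> completedReadList = new ArrayDeque<>();

    synchronized void doneReading(ReadBuffer buf, boolean failed) {
        if (buf.status != Status.READING) {
            return; // already completed once: do not enqueue again
        }
        buf.status = failed ? Status.READ_FAILED : Status.AVAILABLE;
        completedReadList.add(buf); // single insertion into completed queue
    }

    int completedReadListSize() {
        return completedReadList.size();
    }
}
```

With such a guard, a failure path that calls the completion routine again (for example once from the worker thread and once from error handling) cannot insert the same buffer twice.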
[jira] [Commented] (HADOOP-17304) KMS ACL: Allow DeleteKey Operation to Invalidate Cache
[ https://issues.apache.org/jira/browse/HADOOP-17304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212865#comment-17212865 ]

Hadoop QA commented on HADOOP-17304:
------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 32s{color} | | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 24s{color} | | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 8s{color} | | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 50s{color} | | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 42s{color} | | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 0m 49s{color} | | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 46s{color} | | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 20s{color} | | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 28s{color} | | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 28s{color} | | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 55s{color} | | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 55s{color} | | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} blanks {color} | {color:green} 0m 0s{color} | | {color:green} The patch has no blanks issues. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 19s{color} | | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 53s{color} | | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:green}+1{color} | {color:green} unit {color} | {color:gr
[GitHub] [hadoop] snvijaya commented on pull request #2368: Hadoop-17296. ABFS: Force reads to be always of buffer size
snvijaya commented on pull request #2368: URL: https://github.com/apache/hadoop/pull/2368#issuecomment-707518919 @mukund-thakur Thanks for your review. I have updated the PR with suggestions. Kindly request you to review. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] snvijaya commented on pull request #2368: Hadoop-17296. ABFS: Force reads to be always of buffer size
snvijaya commented on pull request #2368:
URL: https://github.com/apache/hadoop/pull/2368#issuecomment-707518631

Test results from accounts on East US 2 region:

### NON-HNS:
SharedKey:
[INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Tests run: 458, Failures: 0, Errors: 0, Skipped: 245
[WARNING] Tests run: 207, Failures: 0, Errors: 0, Skipped: 24

### HNS:
SharedKey:
[INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Tests run: 458, Failures: 0, Errors: 0, Skipped: 24
[WARNING] Tests run: 207, Failures: 0, Errors: 0, Skipped: 24

OAuth:
[INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Tests run: 458, Failures: 0, Errors: 0, Skipped: 66
[WARNING] Tests run: 207, Failures: 0, Errors: 0, Skipped: 140
[GitHub] [hadoop] snvijaya commented on a change in pull request #2368: Hadoop-17296. ABFS: Force reads to be always of buffer size
snvijaya commented on a change in pull request #2368:
URL: https://github.com/apache/hadoop/pull/2368#discussion_r503693637

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
## @@ -223,16 +244,19 @@ private int readInternal(final long position, final byte[] b, final int offset,
     // queue read-aheads
     int numReadAheads = this.readAheadQueueDepth;
-    long nextSize;
     long nextOffset = position;
+    // First read to queue needs to be of readBufferSize and later

Review comment: A couple of things here:
1. The earlier code allowed bufferSize to be configurable while the read-ahead buffer size was fixed, and each loop iteration issued a read of bufferSize, which could leave gaps/holes in the read-ahead range.
2. There is no validation that a fixed 4 MB read-ahead size is optimal for all sequential reads. A larger read-ahead range done in the background can be more performant for apps like DFSIO that are guaranteed sequential.
This PR fixes the bug in point 1 and also adds a provision to configure the read-ahead buffer size.
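The gap issue in point 1 can be seen by planning the queued ranges. A small sketch of the corrected queueing (illustrative names, not the actual AbfsInputStream code) issues the first read at the caller's bufferSize and subsequent reads at the read-ahead block size, so consecutive ranges stay contiguous:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: plan contiguous read-ahead ranges. Each entry is
// {offset, length}. The first range covers bufferSize; later ranges
// advance in readAheadBlockSize steps, leaving no gaps or overlaps.
class ReadAheadPlanSketch {
    static List<long[]> plan(long position, long contentLength,
                             int bufferSize, int readAheadBlockSize,
                             int queueDepth) {
        List<long[]> ranges = new ArrayList<>();
        long nextOffset = position;
        long nextSize = Math.min(bufferSize, contentLength - nextOffset);
        for (int i = 0; i < queueDepth && nextOffset < contentLength; i++) {
            ranges.add(new long[] {nextOffset, nextSize});
            nextOffset += nextSize; // next range starts where this one ends
            nextSize = Math.min(readAheadBlockSize, contentLength - nextOffset);
        }
        return ranges;
    }
}
```

If instead every range were bufferSize long but each prefetch slot only buffered a fixed 4 MB, the bytes past the first 4 MB of each range would be dropped, producing the holes described above.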
## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java
## @@ -37,10 +39,10 @@
   private static final Logger LOGGER = LoggerFactory.getLogger(ReadBufferManager.class);
   private static final int NUM_BUFFERS = 16;
-  private static final int BLOCK_SIZE = 4 * 1024 * 1024;
   private static final int NUM_THREADS = 8;
   private static final int DEFAULT_THRESHOLD_AGE_MILLISECONDS = 3000; // have to see if 3 seconds is a good threshold
+  private static int blockSize = 4 * 1024 * 1024;

Review comment: Done

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java
## @@ -464,4 +483,53 @@ int getCompletedReadListSize() {
   void callTryEvict() {
     tryEvict();
   }
+
+  @VisibleForTesting
+  void testResetReadBufferManager() {

Review comment: Done
[GitHub] [hadoop] snvijaya commented on a change in pull request #2368: Hadoop-17296. ABFS: Force reads to be always of buffer size
snvijaya commented on a change in pull request #2368:
URL: https://github.com/apache/hadoop/pull/2368#discussion_r503691466

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
## @@ -178,11 +195,15 @@ private int readOneBlock(final byte[] b, final int off, final int len) throws IO
       buffer = new byte[bufferSize];
     }
-    // Enable readAhead when reading sequentially
-    if (-1 == fCursorAfterLastRead || fCursorAfterLastRead == fCursor || b.length >= bufferSize) {
+    if (alwaysReadBufferSize) {
       bytesRead = readInternal(fCursor, buffer, 0, bufferSize, false);

Review comment: alwaysReadBufferSize helped the IO pattern match the Gen1 run, but read-ahead had to stay enabled to be performant. For the customer scenario explained in the JIRA (small row groups within an overall small Parquet file), reading the whole buffer size along with read-ahead brought good performance.
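The behavior discussed here can be summarized as a read-size decision. This is a sketch with illustrative names (the real readOneBlock also manages buffers and queues read-aheads): with alwaysReadBufferSize set, every read is issued for the full buffer size; otherwise the full buffer is read only when the access pattern looks sequential.

```java
// Sketch of the read-size choice: full-buffer reads let read-ahead data
// be reused, at the cost of reading more than the caller asked for.
class ReadSizeSketch {
    static int readSize(boolean alwaysReadBufferSize, boolean looksSequential,
                        int requestedLen, int bufferSize) {
        if (alwaysReadBufferSize || looksSequential) {
            return bufferSize;  // read the whole block, prime the cache
        }
        return requestedLen;    // random read: fetch only what was asked
    }
}
```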
[GitHub] [hadoop] snvijaya commented on a change in pull request #2368: Hadoop-17296. ABFS: Force reads to be always of buffer size
snvijaya commented on a change in pull request #2368:
URL: https://github.com/apache/hadoop/pull/2368#discussion_r503689799

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
## @@ -89,9 +91,24 @@ public AbfsInputStream(
     this.tolerateOobAppends = abfsInputStreamContext.isTolerateOobAppends();
     this.eTag = eTag;
     this.readAheadEnabled = true;
+    this.alwaysReadBufferSize =
+        abfsInputStreamContext.shouldReadBufferSizeAlways();
     this.cachedSasToken = new CachedSASToken(
         abfsInputStreamContext.getSasTokenRenewPeriodForStreamsInSeconds());
     this.streamStatistics = abfsInputStreamContext.getStreamStatistics();
+    readAheadBlockSize = abfsInputStreamContext.getReadAheadBlockSize();
+    if (this.bufferSize > readAheadBlockSize) {
+      LOG.debug(
+          "fs.azure.read.request.size[={}] is configured for higher size than "
+              + "fs.azure.read.readahead.blocksize[={}]. Auto-align "
+              + "readAhead block size to be same as readRequestSize.",
+          bufferSize, readAheadBlockSize);
+      readAheadBlockSize = this.bufferSize;
+    }
+
+    // Propagate the config values to ReadBufferManager so that the first instance
+    // to initialize it can set the readAheadBlockSize

Review comment: Fixed

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
## @@ -74,6 +74,9 @@
   public static final String DEFAULT_FS_AZURE_APPEND_BLOB_DIRECTORIES = "";
   public static final int DEFAULT_READ_AHEAD_QUEUE_DEPTH = -1;
+  public static final boolean DEFAULT_ALWAYS_READ_BUFFER_SIZE = false;

Review comment: Done
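The auto-align rule in the hunk above can be sketched in isolation (a hypothetical helper, not the actual constructor code): if `fs.azure.read.request.size` is configured larger than `fs.azure.read.readahead.blocksize`, the block size is raised to match so that a single read request fits in one read-ahead block.

```java
// Sketch: align the read-ahead block size up to the read request size.
class ReadAheadAlignSketch {
    static int effectiveReadAheadBlockSize(int bufferSize, int configuredBlockSize) {
        if (bufferSize > configuredBlockSize) {
            // read request size exceeds block size: auto-align upward
            return bufferSize;
        }
        return configuredBlockSize;
    }
}
```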
[GitHub] [hadoop] snvijaya commented on a change in pull request #2368: Hadoop-17296. ABFS: Force reads to be always of buffer size
snvijaya commented on a change in pull request #2368:
URL: https://github.com/apache/hadoop/pull/2368#discussion_r503689712

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
## @@ -89,9 +91,24 @@ public AbfsInputStream(
     this.tolerateOobAppends = abfsInputStreamContext.isTolerateOobAppends();
     this.eTag = eTag;
     this.readAheadEnabled = true;
+    this.alwaysReadBufferSize =
+        abfsInputStreamContext.shouldReadBufferSizeAlways();
     this.cachedSasToken = new CachedSASToken(
         abfsInputStreamContext.getSasTokenRenewPeriodForStreamsInSeconds());
     this.streamStatistics = abfsInputStreamContext.getStreamStatistics();
+    readAheadBlockSize = abfsInputStreamContext.getReadAheadBlockSize();
+    if (this.bufferSize > readAheadBlockSize) {

Review comment: Done
[GitHub] [hadoop] mukund-thakur commented on pull request #2380: HDFS-15626 TestWebHDFS.testLargeDirectory failing
mukund-thakur commented on pull request #2380:
URL: https://github.com/apache/hadoop/pull/2380#issuecomment-707507811

All the above test failures seem unrelated.
[jira] [Commented] (HADOOP-17304) KMS ACL: Allow DeleteKey Operation to Invalidate Cache
[ https://issues.apache.org/jira/browse/HADOOP-17304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212822#comment-17212822 ]

Xiaoqiao He commented on HADOOP-17304:
--------------------------------------

Thanks [~xyao] for your updates. One nit: would it be more reasonable to define the EnumSet {{INVALIDATE_CACHE_TYPES}} in class KMSACLs? +1 for other changes. Thanks.

> KMS ACL: Allow DeleteKey Operation to Invalidate Cache
> ------------------------------------------------------
>
>                 Key: HADOOP-17304
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17304
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Xiaoyu Yao
>            Assignee: Xiaoyu Yao
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HADOOP-17304.001.patch
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> HADOOP-17208 sends invalidate cache for the key being deleted. The invalidate cache operation itself requires ROLLOVER permission on the key. This ticket is opened to fix the issue caught by TestKMS.testACLs.
[jira] [Updated] (HADOOP-17304) KMS ACL: Allow DeleteKey Operation to Invalidate Cache
[ https://issues.apache.org/jira/browse/HADOOP-17304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoyu Yao updated HADOOP-17304:
--------------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HADOOP-17304) KMS ACL: Allow DeleteKey Operation to Invalidate Cache
[ https://issues.apache.org/jira/browse/HADOOP-17304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoyu Yao updated HADOOP-17304:
--------------------------------
    Attachment: HADOOP-17304.001.patch
[jira] [Comment Edited] (HADOOP-17208) LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all KMSClientProvider instances
[ https://issues.apache.org/jira/browse/HADOOP-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212819#comment-17212819 ]

Xiaoyu Yao edited comment on HADOOP-17208 at 10/13/20, 3:57 AM:
----------------------------------------------------------------

I agree. With HADOOP-17304, it will be needed to expose an additional INVALIDATE_CACHE ACL for DELETE ops. The previous failed test can be used to validate this. Please help review the PR there; the test is kept as-is without adding additional ACLs.

was (Author: xyao):
I agree. With HADOOP-17304, it will be needed to expose additional INVALIDATE_CACHE ACL for DELETE ops. The previous failed test can be used to validate this

> LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all
> KMSClientProvider instances
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-17208
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17208
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.8.4
>            Reporter: Xiaoyu Yao
>            Assignee: Xiaoyu Yao
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Without invalidateCache, the deleted key may still exist in the servers' key cache (CachingKeyProvider in KMSWebApp.java) on instances the deleteKey call did not hit. Clients may still be able to access encrypted files by specifying a connection to KMS instances holding a cached version of the deleted key, until the cache entry (10 min by default) expires.
[jira] [Comment Edited] (HADOOP-17208) LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all KMSClientProvider instances
[ https://issues.apache.org/jira/browse/HADOOP-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212819#comment-17212819 ]

Xiaoyu Yao edited comment on HADOOP-17208 at 10/13/20, 3:56 AM:
----------------------------------------------------------------

I agree. With HADOOP-17304, it will be needed to expose additional INVALIDATE_CACHE ACL for DELETE ops. The previous failed test can be used to validate this

was (Author: xyao):
I agree. With HADOOP-17304, this will not be no need to expose additional INVALIDATE_CACHE ACL for DELETE ops. The previous failed test can be used to validate this
[jira] [Commented] (HADOOP-17208) LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all KMSClientProvider instances
[ https://issues.apache.org/jira/browse/HADOOP-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212819#comment-17212819 ]

Xiaoyu Yao commented on HADOOP-17208:
-------------------------------------

I agree. With HADOOP-17304, this will not be no need to expose additional INVALIDATE_CACHE ACL for DELETE ops. The previous failed test can be used to validate this
[jira] [Work logged] (HADOOP-17301) ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back
[ https://issues.apache.org/jira/browse/HADOOP-17301?focusedWorklogId=499795&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499795 ]

ASF GitHub Bot logged work on HADOOP-17301:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 13/Oct/20 03:52
            Start Date: 13/Oct/20 03:52
    Worklog Time Spent: 10m

Work Description: snvijaya commented on pull request #2369:
URL: https://github.com/apache/hadoop/pull/2369#issuecomment-707468028

@steveloughran - Thanks for your review. I have updated this PR with the suggestions. Kindly request your review.

Issue Time Tracking
-------------------
    Worklog Id: (was: 499795)
    Time Spent: 2h 50m  (was: 2h 40m)
[jira] [Work logged] (HADOOP-17301) ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back
[ https://issues.apache.org/jira/browse/HADOOP-17301?focusedWorklogId=499794&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499794 ]

ASF GitHub Bot logged work on HADOOP-17301:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 13/Oct/20 03:51
            Start Date: 13/Oct/20 03:51
    Worklog Time Spent: 10m

Work Description: snvijaya commented on pull request #2369:
URL: https://github.com/apache/hadoop/pull/2369#issuecomment-707467838

Test results from accounts on East US 2 region:

### NON-HNS:
SharedKey:
[INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Tests run: 457, Failures: 0, Errors: 0, Skipped: 245
[WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24

### HNS:
SharedKey:
[INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Tests run: 457, Failures: 0, Errors: 0, Skipped: 24
[WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24

OAuth:
[INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Tests run: 457, Failures: 0, Errors: 0, Skipped: 66
[WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 141

Issue Time Tracking
-------------------
    Worklog Id: (was: 499794)
    Time Spent: 2h 40m  (was: 2.5h)
[GitHub] [hadoop] snvijaya commented on pull request #2369: HADOOP-17301. ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back
snvijaya commented on pull request #2369:
URL: https://github.com/apache/hadoop/pull/2369#issuecomment-707468028

@steveloughran - Thanks for your review. I have updated this PR with the suggestions. Kindly request your review.
[GitHub] [hadoop] snvijaya commented on pull request #2369: HADOOP-17301. ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back
snvijaya commented on pull request #2369:
URL: https://github.com/apache/hadoop/pull/2369#issuecomment-707467838

Test results from accounts on East US 2 region:

### NON-HNS:
SharedKey:
[INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Tests run: 457, Failures: 0, Errors: 0, Skipped: 245
[WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24

### HNS:
SharedKey:
[INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Tests run: 457, Failures: 0, Errors: 0, Skipped: 24
[WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24

OAuth:
[INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Tests run: 457, Failures: 0, Errors: 0, Skipped: 66
[WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 141
[jira] [Work logged] (HADOOP-17301) ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back
[ https://issues.apache.org/jira/browse/HADOOP-17301?focusedWorklogId=499790&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499790 ]

ASF GitHub Bot logged work on HADOOP-17301:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 13/Oct/20 03:42
            Start Date: 13/Oct/20 03:42
    Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #2369:
URL: https://github.com/apache/hadoop/pull/2369#issuecomment-707465367

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 29s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 29m 46s | | trunk passed |
| +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 0m 33s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | checkstyle | 0m 27s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 38s | | trunk passed |
| +1 :green_heart: | shadedclient | 15m 7s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 30s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | spotbugs | 0m 58s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 0m 55s | | trunk passed |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 29s | | the patch passed |
| +1 :green_heart: | compile | 0m 29s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javac | 0m 29s | | the patch passed |
| +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | javac | 0m 26s | | the patch passed |
| +1 :green_heart: | checkstyle | 0m 16s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 27s | | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 14m 4s | | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 25s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | findbugs | 0m 57s | | the patch passed |
| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 27s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. |
| | | 71m 41s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2369/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2369 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 61d0a03f4d37 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / b3786d6c3cc |
| Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2369/3/testReport/ |
| Max. process+thread count | 430 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2369/3/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=4.1.3 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT
[jira] [Commented] (HADOOP-17223) update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13
[ https://issues.apache.org/jira/browse/HADOOP-17223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212813#comment-17212813 ] Xiaoqiao He commented on HADOOP-17223: -- Got it, thanks [~weichiu] for involving me here. +1 for branch-3.2.2 from my side. > update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13 > - > > Key: HADOOP-17223 > URL: https://issues.apache.org/jira/browse/HADOOP-17223 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.0 >Reporter: Pranav Bheda >Priority: Blocker > Labels: pull-request-available > Attachments: HADOOP-17223.001.patch > > Time Spent: 20m > Remaining Estimate: 0h > > Update the dependencies > * org.apache.httpcomponents:httpclient from 4.5.6 to 4.5.12 > * org.apache.httpcomponents:httpcore from 4.4.10 to 4.4.13 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17208) LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all KMSClientProvider instances
[ https://issues.apache.org/jira/browse/HADOOP-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212811#comment-17212811 ] Xiaoqiao He commented on HADOOP-17208: -- Thanks [~xyao] for your comments. I am concerned that this may be an incompatible change. After this change, we expose the INVALIDATE_CACHE ACL to end users, who did not need to care about it before. Please correct me if I missed something. Thanks. > LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all > KMSClientProvider instances > - > > Key: HADOOP-17208 > URL: https://issues.apache.org/jira/browse/HADOOP-17208 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.8.4 >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 2h 20m > Remaining Estimate: 0h > > Without invalidateCache, the deleted key may still exist in the servers' key > cache (CachingKeyProvider in KMSWebApp.java) on instances where the deleteKey was not > hit. Clients may still be able to access encrypted files by specifying to > connect to KMS instances with a cached version of the deleted key before the > cache entry (10 min by default) expires. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
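The cache-staleness problem described in HADOOP-17208 can be sketched independently of the Hadoop KMS classes. The sketch below is illustrative only: `KmsInstance`, `canDecrypt`, and `deletedKeyStillUsable` are hypothetical stand-ins, not the real KMSClientProvider or CachingKeyProvider API. It models several KMS server instances, each with a local key cache, and shows why a deleteKey must be followed by invalidateCache on every instance rather than only the one that served the delete:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class KmsInvalidateSketch {
    /** Hypothetical stand-in for one KMS server with a local key cache. */
    static class KmsInstance {
        final Set<String> keys = new HashSet<>();        // backing key store
        final Set<String> cachedKeys = new HashSet<>();  // server-side key cache
        void createKey(String name) { keys.add(name); cachedKeys.add(name); }
        void deleteKey(String name) { keys.remove(name); }  // cache NOT touched
        void invalidateCache(String name) { cachedKeys.remove(name); }
        // A request served from cache succeeds even for a deleted key.
        boolean canDecrypt(String name) {
            return cachedKeys.contains(name) || keys.contains(name);
        }
    }

    /** Deletes the key everywhere, then invalidates either all caches or only one. */
    static boolean deletedKeyStillUsable(boolean invalidateAll) {
        List<KmsInstance> instances = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            KmsInstance k = new KmsInstance();
            k.createKey("k1");
            instances.add(k);
        }
        for (KmsInstance k : instances) {
            k.deleteKey("k1");  // key material removed on all instances
        }
        if (invalidateAll) {
            for (KmsInstance k : instances) k.invalidateCache("k1");
        } else {
            // only the instance that served the delete clears its cache
            instances.get(0).invalidateCache("k1");
        }
        // a client routed to another instance may still hit a stale cache entry
        return instances.get(1).canDecrypt("k1");
    }

    public static void main(String[] args) {
        System.out.println(deletedKeyStillUsable(false)); // true: stale-cache window
        System.out.println(deletedKeyStillUsable(true));  // false: cache invalidated everywhere
    }
}
```

In the real system the stale entry would age out on its own (10 minutes by default per the issue description); the sketch only shows why the window exists at all.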
[GitHub] [hadoop] snvijaya commented on a change in pull request #2369: HADOOP-17301. ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back
snvijaya commented on a change in pull request #2369: URL: https://github.com/apache/hadoop/pull/2369#discussion_r503631775 ## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java ## @@ -182,7 +183,39 @@ public void testFailedReadAhead() throws Exception { checkEvictedStatus(inputStream, 0, false); } + @Test + public void testFailedReadAheadEviction() throws Exception { +AbfsClient client = getMockAbfsClient(); +AbfsRestOperation successOp = getMockRestOp(); + ReadBufferManager.setThresholdAgeMilliseconds(INCREASED_READ_BUFFER_AGE_THRESHOLD); +// Stub : +// Read request leads to 3 readahead calls: Fail all 3 readahead-client.read() +// Actual read request fails with the failure in readahead thread +doThrow(new TimeoutException("Internal Server error")) +.when(client) +.read(any(String.class), any(Long.class), any(byte[].class), +any(Integer.class), any(Integer.class), any(String.class), +any(String.class)); + +AbfsInputStream inputStream = getAbfsInputStream(client, "testFailedReadAheadEviction.txt"); + +// Add a failed buffer to completed queue and set to no free buffers to read ahead. +ReadBuffer buff = new ReadBuffer(); +buff.setStatus( + org.apache.hadoop.fs.azurebfs.contracts.services.ReadBufferStatus.READ_FAILED); Review comment: Done ## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java ## @@ -264,12 +297,24 @@ public void testSuccessfulReadAhead() throws Exception { any(String.class)); AbfsInputStream inputStream = getAbfsInputStream(client, "testSuccessfulReadAhead.txt"); +int beforeReadCompletedListSize = ReadBufferManager.getBufferManager().getCompletedReadListSize(); // First read request that triggers readAheads. 
inputStream.read(new byte[ONE_KB]); // Only the 3 readAhead threads should have triggered client.read verifyReadCallCount(client, 3); +int newAdditionsToCompletedRead = +ReadBufferManager.getBufferManager().getCompletedReadListSize() +- beforeReadCompletedListSize; +// read buffer might be dumped if the ReadBufferManager getblock preceded +// the action of buffer being picked for reading from readaheadqueue, so that +// inputstream can proceed with read and not be blocked on readahead thread +// availability. So the count of buffers in completedReadQueue for the stream +// can be same or lesser than the requests triggered to queue readahead. +assertTrue( Review comment: Done This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
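The comment added to testSuccessfulReadAhead above reasons that the completed-read list may gain fewer entries than the number of readaheads queued: the input stream's own block fetch can claim a block before a readahead worker picks it up, and that queued request is then dumped instead of completed. A small deterministic sketch of that invariant follows; the queue and method names here are illustrative stand-ins, not the actual ReadBufferManager API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class ReadAheadCountSketch {
    /**
     * Queues 'queued' readahead requests; 'stolen' of them are satisfied
     * directly by the input stream before a worker picks them up (and are
     * dumped from the queue); the rest complete normally.
     * Returns the number of entries added to the completed-read list.
     */
    static int completedAdditions(int queued, int stolen) {
        Deque<Integer> readAheadQueue = new ArrayDeque<>();
        for (int block = 0; block < queued; block++) {
            readAheadQueue.add(block);
        }
        // Input stream reads some blocks itself; those requests are dumped.
        for (int i = 0; i < stolen && !readAheadQueue.isEmpty(); i++) {
            readAheadQueue.poll();
        }
        // Workers complete whatever is left in the queue.
        List<Integer> completedReadList = new ArrayList<>();
        while (!readAheadQueue.isEmpty()) {
            completedReadList.add(readAheadQueue.poll());
        }
        return completedReadList.size();
    }

    public static void main(String[] args) {
        // The test's assertion holds for any split: additions <= queued.
        for (int stolen = 0; stolen <= 3; stolen++) {
            int added = completedAdditions(3, stolen);
            System.out.println(added + " <= 3: " + (added <= 3));
        }
    }
}
```

This is why the patch asserts `newAdditionsToCompletedRead` is at most, not exactly, the number of readahead calls triggered.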
[jira] [Work logged] (HADOOP-17301) ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back
[ https://issues.apache.org/jira/browse/HADOOP-17301?focusedWorklogId=499773&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499773 ] ASF GitHub Bot logged work on HADOOP-17301: --- Author: ASF GitHub Bot Created on: 13/Oct/20 02:31 Start Date: 13/Oct/20 02:31 Worklog Time Spent: 10m Work Description: snvijaya commented on a change in pull request #2369: URL: https://github.com/apache/hadoop/pull/2369#discussion_r503631680 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java ## @@ -242,13 +243,29 @@ private synchronized boolean tryEvict() { } // next, try any old nodes that have not been consumed +// Failed read buffers (with buffer index=-1) that are older than +// thresholdAge should be cleaned up, but at the same time should not +// report successful eviction. +// Queue logic expects that a buffer is freed up for read ahead when +// eviction is successful, whereas a failed ReadBuffer would have released +// its buffer when its status was set to READ_FAILED. long earliestBirthday = Long.MAX_VALUE; +ArrayList oldFailedBuffers = new ArrayList<>(); for (ReadBuffer buf : completedReadList) { - if (buf.getTimeStamp() < earliestBirthday) { + if ((buf.getBufferindex() != -1) + && (buf.getTimeStamp() < earliestBirthday)) { nodeToEvict = buf; earliestBirthday = buf.getTimeStamp(); + } else if ((buf.getBufferindex() == -1) + && (currentTimeMillis() - buf.getTimeStamp()) > thresholdAgeMilliseconds) { Review comment: Done. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 499773) Time Spent: 2h (was: 1h 50m) > ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back > > > Key: HADOOP-17301 > URL: https://issues.apache.org/jira/browse/HADOOP-17301 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Sneha Vijayarajan >Assignee: Sneha Vijayarajan >Priority: Critical > Labels: pull-request-available > Time Spent: 2h > Remaining Estimate: 0h > > When reads done by readahead buffers failed, the exceptions were dropped and > the failure was not reported to the calling app. > Jira HADOOP-16852: Report read-ahead error back > tried to handle the scenario by reporting the error back to the calling app. But > the commit introduced a bug which can lead to a ReadBuffer being injected > into the read-completed queue twice. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
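The review comment above concerns the tryEvict() change in ReadBufferManager: failed read buffers (buffer index -1) older than the age threshold are cleaned out of the completed list, but must not be reported as a successful eviction, because a READ_FAILED buffer already released its backing buffer. A compact, self-contained sketch of that selection logic follows; the `Buf` class and `selectEvictionCandidate` method are simplified stand-ins, not the actual Hadoop classes:

```java
import java.util.ArrayList;
import java.util.List;

public class EvictionSketch {
    // Simplified stand-in for ReadBuffer: only the fields the eviction scan needs.
    static class Buf {
        final int bufferIndex;  // -1 marks a failed read that already released its buffer
        final long timeStamp;   // creation time in millis
        Buf(int idx, long ts) { bufferIndex = idx; timeStamp = ts; }
    }

    /**
     * Scans the completed list the way the patched tryEvict() does: picks the
     * oldest successful buffer as the eviction candidate, and collects failed
     * buffers past the age threshold for silent cleanup. Returns the eviction
     * candidate, or null if no successful buffer is present.
     */
    static Buf selectEvictionCandidate(List<Buf> completed, long now, long thresholdAgeMs) {
        Buf nodeToEvict = null;
        long earliestBirthday = Long.MAX_VALUE;
        List<Buf> oldFailedBuffers = new ArrayList<>();
        for (Buf buf : completed) {
            if (buf.bufferIndex != -1 && buf.timeStamp < earliestBirthday) {
                nodeToEvict = buf;
                earliestBirthday = buf.timeStamp;
            } else if (buf.bufferIndex == -1
                    && (now - buf.timeStamp) > thresholdAgeMs) {
                oldFailedBuffers.add(buf);
            }
        }
        // Old failed buffers are removed without counting as an eviction:
        // no buffer slot is freed, since READ_FAILED already released it.
        completed.removeAll(oldFailedBuffers);
        return nodeToEvict;
    }

    public static void main(String[] args) {
        List<Buf> completed = new ArrayList<>();
        completed.add(new Buf(-1, 0L));   // old failed buffer: purged, never evicted
        completed.add(new Buf(3, 500L));  // successful buffer: eviction candidate
        Buf candidate = selectEvictionCandidate(completed, 10_000L, 5_000L);
        System.out.println(candidate.bufferIndex);  // 3
        System.out.println(completed.size());       // 1 (failed buffer purged)
    }
}
```

Treating the two cases separately is what prevents the double-injection bug: an aged failed buffer no longer masquerades as the "oldest node" that eviction would hand back to the queue.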
[jira] [Work logged] (HADOOP-17301) ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back
[ https://issues.apache.org/jira/browse/HADOOP-17301?focusedWorklogId=499774&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499774 ] ASF GitHub Bot logged work on HADOOP-17301: --- Author: ASF GitHub Bot Created on: 13/Oct/20 02:31 Start Date: 13/Oct/20 02:31 Worklog Time Spent: 10m Work Description: snvijaya commented on a change in pull request #2369: URL: https://github.com/apache/hadoop/pull/2369#discussion_r503631736 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java ## @@ -464,4 +480,10 @@ int getCompletedReadListSize() { void callTryEvict() { tryEvict(); } + + @VisibleForTesting Review comment: Done This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 499774) Time Spent: 2h 10m (was: 2h) > ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back > > > Key: HADOOP-17301 > URL: https://issues.apache.org/jira/browse/HADOOP-17301 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Sneha Vijayarajan >Assignee: Sneha Vijayarajan >Priority: Critical > Labels: pull-request-available > Time Spent: 2h 10m > Remaining Estimate: 0h > > When reads done by readahead buffers failed, the exceptions were dropped and > the failure was not reported to the calling app. > Jira HADOOP-16852: Report read-ahead error back > tried to handle the scenario by reporting the error back to the calling app. But > the commit introduced a bug which can lead to a ReadBuffer being injected > into the read-completed queue twice. 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17301) ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back
[ https://issues.apache.org/jira/browse/HADOOP-17301?focusedWorklogId=499775&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499775 ] ASF GitHub Bot logged work on HADOOP-17301: --- Author: ASF GitHub Bot Created on: 13/Oct/20 02:31 Start Date: 13/Oct/20 02:31 Worklog Time Spent: 10m Work Description: snvijaya commented on a change in pull request #2369: URL: https://github.com/apache/hadoop/pull/2369#discussion_r503631775 ## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java ## @@ -182,7 +183,39 @@ public void testFailedReadAhead() throws Exception { checkEvictedStatus(inputStream, 0, false); } + @Test + public void testFailedReadAheadEviction() throws Exception { +AbfsClient client = getMockAbfsClient(); +AbfsRestOperation successOp = getMockRestOp(); + ReadBufferManager.setThresholdAgeMilliseconds(INCREASED_READ_BUFFER_AGE_THRESHOLD); +// Stub : +// Read request leads to 3 readahead calls: Fail all 3 readahead-client.read() +// Actual read request fails with the failure in readahead thread +doThrow(new TimeoutException("Internal Server error")) +.when(client) +.read(any(String.class), any(Long.class), any(byte[].class), +any(Integer.class), any(Integer.class), any(String.class), +any(String.class)); + +AbfsInputStream inputStream = getAbfsInputStream(client, "testFailedReadAheadEviction.txt"); + +// Add a failed buffer to completed queue and set to no free buffers to read ahead. 
+ReadBuffer buff = new ReadBuffer(); +buff.setStatus( + org.apache.hadoop.fs.azurebfs.contracts.services.ReadBufferStatus.READ_FAILED); Review comment: Done ## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java ## @@ -264,12 +297,24 @@ public void testSuccessfulReadAhead() throws Exception { any(String.class)); AbfsInputStream inputStream = getAbfsInputStream(client, "testSuccessfulReadAhead.txt"); +int beforeReadCompletedListSize = ReadBufferManager.getBufferManager().getCompletedReadListSize(); // First read request that triggers readAheads. inputStream.read(new byte[ONE_KB]); // Only the 3 readAhead threads should have triggered client.read verifyReadCallCount(client, 3); +int newAdditionsToCompletedRead = +ReadBufferManager.getBufferManager().getCompletedReadListSize() +- beforeReadCompletedListSize; +// read buffer might be dumped if the ReadBufferManager getblock preceded +// the action of buffer being picked for reading from readaheadqueue, so that +// inputstream can proceed with read and not be blocked on readahead thread +// availability. So the count of buffers in completedReadQueue for the stream +// can be same or lesser than the requests triggered to queue readahead. +assertTrue( Review comment: Done This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 499775) Time Spent: 2h 20m (was: 2h 10m) > ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back > > > Key: HADOOP-17301 > URL: https://issues.apache.org/jira/browse/HADOOP-17301 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Sneha Vijayarajan >Assignee: Sneha Vijayarajan >Priority: Critical > Labels: pull-request-available > Time Spent: 2h 20m > Remaining Estimate: 0h > > When reads done by readahead buffers failed, the exceptions were dropped and > the failure was not reported to the calling app. > Jira HADOOP-16852: Report read-ahead error back > tried to handle the scenario by reporting the error back to the calling app. But > the commit introduced a bug which can lead to a ReadBuffer being injected > into the read-completed queue twice. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17301) ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back
[ https://issues.apache.org/jira/browse/HADOOP-17301?focusedWorklogId=499772&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499772 ] ASF GitHub Bot logged work on HADOOP-17301: --- Author: ASF GitHub Bot Created on: 13/Oct/20 02:30 Start Date: 13/Oct/20 02:30 Worklog Time Spent: 10m Work Description: snvijaya commented on a change in pull request #2369: URL: https://github.com/apache/hadoop/pull/2369#discussion_r503631485 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java ## @@ -464,4 +480,10 @@ int getCompletedReadListSize() { void callTryEvict() { tryEvict(); } + + @VisibleForTesting Review comment: Done This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 499772) Time Spent: 1h 50m (was: 1h 40m) > ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back > > > Key: HADOOP-17301 > URL: https://issues.apache.org/jira/browse/HADOOP-17301 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Sneha Vijayarajan >Assignee: Sneha Vijayarajan >Priority: Critical > Labels: pull-request-available > Time Spent: 1h 50m > Remaining Estimate: 0h > > When reads done by readahead buffers failed, the exceptions were dropped and > the failure was not reported to the calling app. > Jira HADOOP-16852: Report read-ahead error back > tried to handle the scenario by reporting the error back to the calling app. But > the commit introduced a bug which can lead to a ReadBuffer being injected > into the read-completed queue twice. 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17223) update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13
[ https://issues.apache.org/jira/browse/HADOOP-17223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-17223: - Target Version/s: 3.2.2, 3.3.1, 3.4.0 (was: 3.4.0) Priority: Blocker (was: Major) Given there is a new CVE, I'm updating the priority to blocker and adding 3.2.2 as a target version since it is about to be released. fyi [~hexiaoqiao] > update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13 > - > > Key: HADOOP-17223 > URL: https://issues.apache.org/jira/browse/HADOOP-17223 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.0 >Reporter: Pranav Bheda >Priority: Blocker > Labels: pull-request-available > Attachments: HADOOP-17223.001.patch > > Time Spent: 20m > Remaining Estimate: 0h > > Update the dependencies > * org.apache.httpcomponents:httpclient from 4.5.6 to 4.5.12 > * org.apache.httpcomponents:httpcore from 4.4.10 to 4.4.13 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16990) Update Mockserver
[ https://issues.apache.org/jira/browse/HADOOP-16990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-16990: - Fix Version/s: 3.1.5 3.3.1 3.2.2 Resolution: Fixed Status: Resolved (was: Patch Available) I committed the patch to branch-3.3 and branch-3.2. Thanks [~aajisaka] and [~adoroszlai]! > Update Mockserver > - > > Key: HADOOP-16990 > URL: https://issues.apache.org/jira/browse/HADOOP-16990 > Project: Hadoop Common > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Attila Doroszlai >Priority: Major > Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5 > > Attachments: HADOOP-16990-branch-3.1.004.patch, > HADOOP-16990-branch-3.3.002.patch, HADOOP-16990.001.patch, > HDFS-15620-branch-3.3-addendum.patch > > > We are on Mockserver 3.9.2 which is more than 5 years old. Time to update. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17258) MagicS3GuardCommitter fails with `pendingset` already exists
[ https://issues.apache.org/jira/browse/HADOOP-17258?focusedWorklogId=499738&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499738 ] ASF GitHub Bot logged work on HADOOP-17258: --- Author: ASF GitHub Bot Created on: 12/Oct/20 23:47 Start Date: 12/Oct/20 23:47 Worklog Time Spent: 10m Work Description: dongjoon-hyun commented on pull request #2371: URL: https://github.com/apache/hadoop/pull/2371#issuecomment-707400186 Thank you, @steveloughran . This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 499738) Time Spent: 4h 40m (was: 4.5h) > MagicS3GuardCommitter fails with `pendingset` already exists > > > Key: HADOOP-17258 > URL: https://issues.apache.org/jira/browse/HADOOP-17258 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Dongjoon Hyun >Assignee: Dongjoon Hyun >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1 > > Time Spent: 4h 40m > Remaining Estimate: 0h > > In `trunk/branch-3.3/branch-3.2`, `MagicS3GuardCommitter.innerCommitTask` has > `false` at `pendingSet.save`. > {code} > try { > pendingSet.save(getDestFS(), taskOutcomePath, false); > } catch (IOException e) { > LOG.warn("Failed to save task commit data to {} ", > taskOutcomePath, e); > abortPendingUploads(context, pendingSet.getCommits(), true); > throw e; > } > {code} > And, it can cause a job failure like the following. > {code} > WARN TaskSetManager: Lost task 1562.1 in stage 1.0 (TID 1788, 100.92.11.63, > executor 26): org.apache.spark.SparkException: Task failed while writing rows. 
> at > org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:257) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169) > at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) > at org.apache.spark.scheduler.Task.run(Task.scala:123) > at > org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408) > at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414) > at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown > Source) > at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown > Source) > at java.base/java.lang.Thread.run(Unknown Source) > Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: > s3a://xxx/__magic/app-attempt-/task_20200911063607_0001_m_001562.pendingset > already exists > at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:761) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:987) > at > org.apache.hadoop.util.JsonSerialization.save(JsonSerialization.java:269) > at > org.apache.hadoop.fs.s3a.commit.files.PendingSet.save(PendingSet.java:170) > at > org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter.innerCommitTask(MagicS3GuardCommitter.java:220) > at > org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter.commitTask(MagicS3GuardCommitter.java:165) > at > org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:50) > at > 
org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:77) > at > org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitTask(HadoopMapReduceCommitProtocol.scala:244) > at > org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:78) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242) > {code} >
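The third argument to `pendingSet.save` above is the overwrite flag, and the PR title records the fix: overwrite the existing `.pendingset` file on task commit. A minimal, self-contained sketch of the failure mode and the fix, using plain `java.nio` in place of the Hadoop `FileSystem` API (the class and method names here are illustrative, not the actual committer code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Simplified model of the committer's save semantics using plain java.nio
// (illustrative names; the real code is MagicS3GuardCommitter.innerCommitTask
// calling PendingSet.save against a Hadoop FileSystem).
public class PendingSetSaveDemo {

    // overwrite=false reproduces the buggy call: a retried task attempt
    // writing to the same .pendingset path fails with "already exists".
    static void save(Path dest, String data, boolean overwrite) throws IOException {
        if (!overwrite && Files.exists(dest)) {
            throw new IOException(dest + " already exists");
        }
        // With overwrite=true an existing file is simply replaced.
        Files.write(dest, data.getBytes());
    }

    public static void main(String[] args) throws IOException {
        Path dest = Files.createTempDirectory("magic").resolve("task_0001.pendingset");
        save(dest, "attempt-1", false);          // first attempt succeeds
        try {
            save(dest, "attempt-2", false);      // retry fails, as in the stack trace
        } catch (IOException e) {
            System.out.println("retry rejected: " + e.getMessage());
        }
        save(dest, "attempt-2", true);           // the fix: overwrite on task commit
        System.out.println("final content: " + new String(Files.readAllBytes(dest)));
    }
}
```

With `overwrite=false`, a second attempt of the same task writing the same `.pendingset` path hits the `FileAlreadyExistsException` shown in the stack trace; passing `true` lets the retry replace the earlier attempt's file.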
[GitHub] [hadoop] dongjoon-hyun commented on pull request #2371: HADOOP-17258. Magic S3Guard Committer to overwrite existing pendingSet file on task commit
dongjoon-hyun commented on pull request #2371: URL: https://github.com/apache/hadoop/pull/2371#issuecomment-707400186 Thank you, @steveloughran . This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #1857: HADOOP-16878. Copy command in FileUtil to throw an exception if the source and destination is the same
hadoop-yetus commented on pull request #1857: URL: https://github.com/apache/hadoop/pull/1857#issuecomment-707363726 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 9s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 6m 5s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 26m 18s | | trunk passed | | +1 :green_heart: | compile | 20m 57s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 17m 44s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | checkstyle | 2m 53s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 48s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 57s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 26s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 2m 53s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +0 :ok: | spotbugs | 3m 15s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 5m 27s | | trunk passed | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 22s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 0s | | the patch passed | | +1 :green_heart: | compile | 20m 20s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javac | 20m 20s | | the patch passed | | +1 :green_heart: | compile | 17m 34s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | javac | 17m 34s | | the patch passed | | +1 :green_heart: | checkstyle | 2m 52s | | the patch passed | | +1 :green_heart: | mvnsite | 2m 49s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 47s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 25s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 2m 54s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | findbugs | 5m 45s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 10m 5s | | hadoop-common in the patch passed. | | -1 :x: | unit | 111m 21s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1857/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | | The patch does not generate ASF License warnings. 
| | | | 302m 42s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMXBean | | | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized | | | hadoop.hdfs.web.TestWebHDFS | | | hadoop.hdfs.TestFileChecksum | | | hadoop.hdfs.TestFileChecksumCompositeCrc | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1857/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1857 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 089bdf33d796 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / b92f72758bf | | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1857/6/testReport/ | | Max. process+thread count | 28
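The PR exercised by the report above changes `FileUtil`'s copy to throw when source and destination are the same file. A hedged sketch of such a check using `java.nio` (illustrative only, not the actual `FileUtil` code, which operates on Hadoop `Path`/`FileSystem` objects):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of the behavior in the PR title: copying a file onto
// itself should fail fast instead of silently truncating the source.
public class CopySelfCheckDemo {

    static void copy(Path src, Path dst) throws IOException {
        // isSameFile resolves relative paths and symlinks before comparing.
        if (Files.isSameFile(src, dst)) {
            throw new IOException("Source " + src + " and destination " + dst + " are the same");
        }
        Files.copy(src, dst);
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("fileutil", ".txt");
        try {
            copy(src, src);                       // self-copy is rejected
        } catch (IOException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```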
[jira] [Work logged] (HADOOP-16878) Copy command in FileUtil to throw an exception if the source and destination is the same
[ https://issues.apache.org/jira/browse/HADOOP-16878?focusedWorklogId=499669&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499669 ] ASF GitHub Bot logged work on HADOOP-16878: --- Author: ASF GitHub Bot Created on: 12/Oct/20 21:50 Start Date: 12/Oct/20 21:50 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #1857: URL: https://github.com/apache/hadoop/pull/1857#issuecomment-707363726
[jira] [Commented] (HADOOP-17223) update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13
[ https://issues.apache.org/jira/browse/HADOOP-17223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212701#comment-17212701 ] Hadoop QA commented on HADOOP-17223: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 37s{color} | | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 1s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 44m 5s{color} | | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 14s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s{color} | | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} blanks {color} | {color:green} 0m 0s{color} | | {color:green} The patch has no blanks issues. {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 22s{color} | | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | || || || || {color:brown} Other Tests {color} || || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s{color} | | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 69m 9s{color} | | {color:black}{color} | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/96/artifact/out/Dockerfile | | JIRA Issue | HADOOP-17223 | | JI
[GitHub] [hadoop] umamaheswararao merged pull request #2378: HDFS-15625: Namenode trashEmptier should not init ViewFs on startup
umamaheswararao merged pull request #2378: URL: https://github.com/apache/hadoop/pull/2378 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] umamaheswararao commented on pull request #2378: HDFS-15625: Namenode trashEmptier should not init ViewFs on startup
umamaheswararao commented on pull request #2378: URL: https://github.com/apache/hadoop/pull/2378#issuecomment-707352110 The failures seem to be unrelated; they are due to OOM. Thanks @jojochuang for the review! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2242: HADOOP-17223 update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13
hadoop-yetus commented on pull request #2242: URL: https://github.com/apache/hadoop/pull/2242#issuecomment-707343064 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 29s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 29m 30s | | trunk passed | | +1 :green_heart: | compile | 0m 21s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 0m 21s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | mvnsite | 0m 25s | | trunk passed | | +1 :green_heart: | shadedclient | 45m 14s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 23s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 0m 23s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 15s | | the patch passed | | +1 :green_heart: | compile | 0m 12s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javac | 0m 12s | | the patch passed | | +1 :green_heart: | compile | 0m 12s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | javac | 0m 12s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 15s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. 
| | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 14m 18s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 18s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | _ Other Tests _ | | +1 :green_heart: | unit | 0m 17s | | hadoop-project in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. | | | | 66m 3s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2242/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2242 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux bceb1cd3e308 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 2e46ef9417e | | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2242/2/testReport/ | | Max. process+thread count | 481 (vs. ulimit of 5500) | | modules | C: hadoop-project U: hadoop-project | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2242/2/console | | versions | git=2.17.1 maven=3.6.0 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17223) update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13
[ https://issues.apache.org/jira/browse/HADOOP-17223?focusedWorklogId=499634&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499634 ] ASF GitHub Bot logged work on HADOOP-17223: --- Author: ASF GitHub Bot Created on: 12/Oct/20 20:59 Start Date: 12/Oct/20 20:59 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2242: URL: https://github.com/apache/hadoop/pull/2242#issuecomment-707343064
[jira] [Updated] (HADOOP-17304) KMS ACL: Allow DeleteKey Operation to Invalidate Cache
[ https://issues.apache.org/jira/browse/HADOOP-17304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-17304: Labels: pull-request-available (was: ) > KMS ACL: Allow DeleteKey Operation to Invalidate Cache > -- > > Key: HADOOP-17304 > URL: https://issues.apache.org/jira/browse/HADOOP-17304 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > HADOOP-17208 sends an invalidate-cache request for a key being deleted. The invalidate > cache operation itself requires ROLLOVER permission on the key. This ticket > is opened to fix the issue caught by TestKMS.testACLs. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17304) KMS ACL: Allow DeleteKey Operation to Invalidate Cache
[ https://issues.apache.org/jira/browse/HADOOP-17304?focusedWorklogId=499630&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499630 ] ASF GitHub Bot logged work on HADOOP-17304: --- Author: ASF GitHub Bot Created on: 12/Oct/20 20:54 Start Date: 12/Oct/20 20:54 Worklog Time Spent: 10m Work Description: xiaoyuyao opened a new pull request #2381: URL: https://github.com/apache/hadoop/pull/2381 https://issues.apache.org/jira/browse/HADOOP-17304 Adding an ACL check to allow both DeleteKey and RollOverKey to invalidate the cache. The test case is covered by TestKMS#testAcls. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 499630) Remaining Estimate: 0h Time Spent: 10m > KMS ACL: Allow DeleteKey Operation to Invalidate Cache > -- > > Key: HADOOP-17304 > URL: https://issues.apache.org/jira/browse/HADOOP-17304 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > HADOOP-17208 sends an invalidate-cache request for a key being deleted. The invalidate > cache operation itself requires ROLLOVER permission on the key. This ticket > is opened to fix the issue caught by TestKMS.testACLs. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xiaoyuyao opened a new pull request #2381: HADOOP-17304. KMS ACL: Allow DeleteKey Operation to Invalidate Cache.
xiaoyuyao opened a new pull request #2381: URL: https://github.com/apache/hadoop/pull/2381 https://issues.apache.org/jira/browse/HADOOP-17304 Adding an ACL check to allow both DeleteKey and RollOverKey to invalidate the cache. The test case is covered by TestKMS#testAcls. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
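The change described in this PR can be modeled as widening the set of key ACLs that authorize a cache invalidation from ROLLOVER alone to DELETE or ROLLOVER, so that deleting a key can also evict it from the cache. A simplified sketch under that assumption (the enum and method names below are illustrative, not the real KMSACLs API):

```java
import java.util.EnumSet;
import java.util.Set;

// Hypothetical, simplified model of the per-key ACL decision: before the
// fix, invalidate-cache was authorized only by ROLLOVER; after it, a caller
// holding either DELETE or ROLLOVER on the key may invalidate its cache.
public class InvalidateCacheAclDemo {
    enum Op { DELETE, ROLLOVER }

    static boolean mayInvalidateCache(Set<Op> callerPerms) {
        // The fix: accept DELETE in addition to ROLLOVER, so a key deletion
        // can invalidate the cached key material as part of the same call.
        return callerPerms.contains(Op.DELETE) || callerPerms.contains(Op.ROLLOVER);
    }

    public static void main(String[] args) {
        System.out.println(mayInvalidateCache(EnumSet.of(Op.DELETE)));     // true
        System.out.println(mayInvalidateCache(EnumSet.of(Op.ROLLOVER)));   // true
        System.out.println(mayInvalidateCache(EnumSet.noneOf(Op.class)));  // false
    }
}
```

Under the old rule, a caller with only DeleteKey permission would be rejected on the invalidate-cache step, which is exactly the failure TestKMS#testAcls caught.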
[jira] [Updated] (HADOOP-17304) KMS ACL: Allow DeleteKey Operation to Invalidate Cache
[ https://issues.apache.org/jira/browse/HADOOP-17304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HADOOP-17304: Description: HADOOP-17208 sends an invalidate-cache request for a key being deleted. The invalidate cache operation itself requires ROLLOVER permission on the key. This ticket is opened to fix the issue caught by TestKMS.testACLs. (was: HADOOP-17208 send invalidate cache for key being deleted. The invalidate cache operation itself requires ROLLOVER permission on the key. This ticket is opened to fix the test. ) > KMS ACL: Allow DeleteKey Operation to Invalidate Cache > -- > > Key: HADOOP-17304 > URL: https://issues.apache.org/jira/browse/HADOOP-17304 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > > HADOOP-17208 sends an invalidate-cache request for a key being deleted. The invalidate > cache operation itself requires ROLLOVER permission on the key. This ticket > is opened to fix the issue caught by TestKMS.testACLs. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17304) KMS ACL: Allow DeleteKey Operation to Invalidate Cache
[ https://issues.apache.org/jira/browse/HADOOP-17304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HADOOP-17304: Summary: KMS ACL: Allow DeleteKey Operation to Invalidate Cache (was: Fix TestKMS.testACLs) > KMS ACL: Allow DeleteKey Operation to Invalidate Cache > -- > > Key: HADOOP-17304 > URL: https://issues.apache.org/jira/browse/HADOOP-17304 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > > HADOOP-17208 send invalidate cache for key being deleted. The invalidate > cache operation itself requires ROLLOVER permission on the key. This ticket > is opened to fix the test.
[jira] [Commented] (HADOOP-17223) update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13
[ https://issues.apache.org/jira/browse/HADOOP-17223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212657#comment-17212657 ] Pranav Bheda commented on HADOOP-17223: --- Here is the updated patch. [^HADOOP-17223.001.patch] > update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13 > - > > Key: HADOOP-17223 > URL: https://issues.apache.org/jira/browse/HADOOP-17223 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.0 >Reporter: Pranav Bheda >Priority: Major > Labels: pull-request-available > Attachments: HADOOP-17223.001.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Update the dependencies > * org.apache.httpcomponents:httpclient from 4.5.6 to 4.5.12 > * org.apache.httpcomponents:httpcore from 4.4.10 to 4.4.13
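The dependency bump described above is typically a version-property change in the Maven build; a minimal illustrative fragment (the property names and file location are assumptions, not necessarily Hadoop's actual pom.xml):

```xml
<!-- illustrative pom.xml fragment; property names are assumed -->
<properties>
  <httpclient.version>4.5.12</httpclient.version> <!-- was 4.5.6 -->
  <httpcore.version>4.4.13</httpcore.version>     <!-- was 4.4.10 -->
</properties>
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>${httpclient.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpcore</artifactId>
      <version>${httpcore.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```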
[jira] [Issue Comment Deleted] (HADOOP-17223) update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13
[ https://issues.apache.org/jira/browse/HADOOP-17223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pranav Bheda updated HADOOP-17223: -- Comment: was deleted (was: Here is the updated patch. [^HADOOP-17223.001.patch])
[jira] [Updated] (HADOOP-17223) update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13
[ https://issues.apache.org/jira/browse/HADOOP-17223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pranav Bheda updated HADOOP-17223: -- Attachment: (was: HADOOP-17223.001.patch)
[jira] [Updated] (HADOOP-17223) update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13
[ https://issues.apache.org/jira/browse/HADOOP-17223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pranav Bheda updated HADOOP-17223: -- Attachment: HADOOP-17223.001.patch
[jira] [Commented] (HADOOP-17223) update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13
[ https://issues.apache.org/jira/browse/HADOOP-17223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212655#comment-17212655 ] Pranav Bheda commented on HADOOP-17223: --- Here is the updated patch. [^HADOOP-17223.001.patch]
[GitHub] [hadoop] hadoop-yetus commented on pull request #2370: HDFS-15614. Initialize snapshot trash root during NameNode startup if enabled
hadoop-yetus commented on pull request #2370: URL: https://github.com/apache/hadoop/pull/2370#issuecomment-707328835 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 3s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 0s | | trunk passed | | +1 :green_heart: | compile | 1m 18s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 1m 8s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 55s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 17s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 49s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 49s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 20s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +0 :ok: | spotbugs | 3m 10s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 3m 7s | | trunk passed | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 8s | | the patch passed | | +1 :green_heart: | compile | 1m 10s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javac | 1m 10s | | the patch passed | | +1 :green_heart: | compile | 1m 3s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | javac | 1m 3s | | the patch passed | | +1 :green_heart: | checkstyle | 0m 49s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 10s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 42s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 48s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 16s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | findbugs | 3m 18s | | the patch passed | _ Other Tests _ | | -1 :x: | unit | 108m 12s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2370/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +0 :ok: | asflicense | 0m 35s | | ASF License check generated no output? 
| | | | 197m 44s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.namenode.TestINodeFile | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshot | | | hadoop.hdfs.server.namenode.TestAddStripedBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.ha.TestStandbyInProgressTail | | | hadoop.hdfs.server.namenode.ha.TestHAStateTransitions | | | hadoop.hdfs.server.namenode.ha.TestHASafeMode | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion | | | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA | | | hadoop.hdfs.server.namenode.TestCacheDirectives | | | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes | | | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints | | | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR | | | hadoop.hdfs.server.namenode.TestStripedINodeFile | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport | | | hadoop.hdfs.server.namenode.ha.TestObserverNode | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2370/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2370 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall
[jira] [Commented] (HADOOP-17223) update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13
[ https://issues.apache.org/jira/browse/HADOOP-17223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212651#comment-17212651 ] Hadoop QA commented on HADOOP-17223: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 15s{color} | | {color:red} HADOOP-17223 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-17223 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13013453/HADOOP-17223.001.patch | | Console output | https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/95/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[jira] [Updated] (HADOOP-17223) update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13
[ https://issues.apache.org/jira/browse/HADOOP-17223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pranav Bheda updated HADOOP-17223: -- Attachment: HADOOP-17223.001.patch
[jira] [Updated] (HADOOP-17223) update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13
[ https://issues.apache.org/jira/browse/HADOOP-17223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pranav Bheda updated HADOOP-17223: -- Attachment: (was: HADOOP-17223.001.patch)
[GitHub] [hadoop] hadoop-yetus commented on pull request #2378: HDFS-15625: Namenode trashEmptier should not init ViewFs on startup
hadoop-yetus commented on pull request #2378: URL: https://github.com/apache/hadoop/pull/2378#issuecomment-707320489 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 31s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 29m 43s | | trunk passed | | +1 :green_heart: | compile | 1m 17s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 1m 13s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 51s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 19s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 33s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 52s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 23s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +0 :ok: | spotbugs | 3m 0s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 2m 59s | | trunk passed | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 10s | | the patch passed | | +1 :green_heart: | compile | 1m 10s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javac | 1m 10s | | the patch passed | | +1 :green_heart: | compile | 1m 3s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | javac | 1m 3s | | the patch passed | | +1 :green_heart: | checkstyle | 0m 42s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 9s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 14m 0s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 47s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 17s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | findbugs | 2m 59s | | the patch passed | _ Other Tests _ | | -1 :x: | unit | 108m 27s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2378/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 42s | | The patch does not generate ASF License warnings. 
| | | | 192m 6s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks | | | hadoop.hdfs.server.blockmanagement.TestPendingReconstruction | | | hadoop.hdfs.TestFileChecksum | | | hadoop.hdfs.server.namenode.TestStripedINodeFile | | | hadoop.hdfs.server.namenode.TestFSImage | | | hadoop.hdfs.server.datanode.TestDataNodeReconfiguration | | | hadoop.hdfs.server.blockmanagement.TestSequentialBlockGroupId | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.TestDecommissionWithStriped | | | hadoop.hdfs.server.namenode.TestAuditLogs | | | hadoop.hdfs.server.balancer.TestBalancer | | | hadoop.hdfs.server.balancer.TestBalancerRPCDelay | | | hadoop.hdfs.server.blockmanagement.TestBlockInfoStriped | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.web.TestWebHDFS | | | hadoop.hdfs.TestMultipleNNPortQOP | | | hadoop.hdfs.server.namenode.TestCacheDirectives | | | hadoop.hdfs.TestFileChecksumCompositeCrc | | | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics | | | hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot | | | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion | | | hadoop.hdfs.server.blockmanagement.TestBlockManager | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSn
[jira] [Work logged] (HADOOP-17271) S3A statistics to support IOStatistics
[ https://issues.apache.org/jira/browse/HADOOP-17271?focusedWorklogId=499609&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499609 ] ASF GitHub Bot logged work on HADOOP-17271: --- Author: ASF GitHub Bot Created on: 12/Oct/20 20:04 Start Date: 12/Oct/20 20:04 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2324: URL: https://github.com/apache/hadoop/pull/2324#issuecomment-707319515 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 31s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 4s | | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 43 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 6m 11s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 24m 17s | | trunk passed | | +1 :green_heart: | compile | 19m 47s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 16m 58s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | checkstyle | 2m 48s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 15s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 53s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 50s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 2m 51s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +0 :ok: | spotbugs | 1m 18s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 4m 52s | | trunk passed | | -0 :warning: | patch | 1m 38s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 57s | | the patch passed | | +1 :green_heart: | compile | 18m 57s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javac | 18m 57s | | root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2048 unchanged - 1 fixed = 2048 total (was 2049) | | +1 :green_heart: | compile | 17m 8s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | javac | 17m 8s | | root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1941 unchanged - 1 fixed = 1941 total (was 1942) | | -0 :warning: | checkstyle | 2m 46s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/16/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 5 new + 267 unchanged - 25 fixed = 272 total (was 292) | | +1 :green_heart: | mvnsite | 3m 15s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 14m 39s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 44s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 39s | | hadoop-common in the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. 
| | +1 :green_heart: | javadoc | 0m 30s | | hadoop-mapreduce-client-core in the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. | | +1 :green_heart: | javadoc | 0m 41s | | hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 0 unchanged - 4 fixed = 0 total (was 4) | | +1 :green_heart: | findbugs | 5m 55s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 10m 3s | | hadoop-common in the patch passed. |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2324: HADOOP-17271. S3A statistics to support IOStatistics
hadoop-yetus commented on pull request #2324: URL: https://github.com/apache/hadoop/pull/2324#issuecomment-707319515 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 31s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 4s | | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 43 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 6m 11s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 24m 17s | | trunk passed | | +1 :green_heart: | compile | 19m 47s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 16m 58s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | checkstyle | 2m 48s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 15s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 53s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 50s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 2m 51s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +0 :ok: | spotbugs | 1m 18s | | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 4m 52s | | trunk passed | | -0 :warning: | patch | 1m 38s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 57s | | the patch passed | | +1 :green_heart: | compile | 18m 57s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javac | 18m 57s | | root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2048 unchanged - 1 fixed = 2048 total (was 2049) | | +1 :green_heart: | compile | 17m 8s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | javac | 17m 8s | | root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1941 unchanged - 1 fixed = 1941 total (was 1942) | | -0 :warning: | checkstyle | 2m 46s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/16/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 5 new + 267 unchanged - 25 fixed = 272 total (was 292) | | +1 :green_heart: | mvnsite | 3m 15s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 14m 39s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 44s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 39s | | hadoop-common in the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. | | +1 :green_heart: | javadoc | 0m 30s | | hadoop-mapreduce-client-core in the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. 
| | +1 :green_heart: | javadoc | 0m 41s | | hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 0 unchanged - 4 fixed = 0 total (was 4) | | +1 :green_heart: | findbugs | 5m 55s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 10m 3s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 7m 0s | | hadoop-mapreduce-client-core in the patch passed. | | +1 :green_heart: | unit | 1m 39s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 56s | | The patch does not generate ASF License warnings. | | | | 193m 37s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI
[jira] [Work logged] (HADOOP-16830) Add public IOStatistics API
[ https://issues.apache.org/jira/browse/HADOOP-16830?focusedWorklogId=499605&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499605 ] ASF GitHub Bot logged work on HADOOP-16830: --- Author: ASF GitHub Bot Created on: 12/Oct/20 19:53 Start Date: 12/Oct/20 19:53 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2323: URL: https://github.com/apache/hadoop/pull/2323#issuecomment-707314832 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 29m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 11 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 29m 53s | | trunk passed | | +1 :green_heart: | compile | 22m 24s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 20m 48s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 58s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 38s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 36s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 35s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 43s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +0 :ok: | spotbugs | 2m 50s | | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 2m 47s | | trunk passed | | -0 :warning: | patch | 3m 11s | | Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 57s | | the patch passed | | +1 :green_heart: | compile | 24m 10s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | -1 :x: | javac | 24m 10s | [/diff-compile-javac-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2323/10/artifact/out/diff-compile-javac-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt) | root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 20 new + 2050 unchanged - 0 fixed = 2070 total (was 2050) | | -1 :x: | compile | 5m 14s | [/patch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2323/10/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt) | root in the patch failed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. | | -1 :x: | javac | 5m 14s | [/patch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2323/10/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt) | root in the patch failed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. 
| | -0 :warning: | checkstyle | 0m 36s | [/buildtool-patch-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2323/10/artifact/out/buildtool-patch-checkstyle-hadoop-common-project_hadoop-common.txt) | The patch fails to run checkstyle in hadoop-common | | -1 :x: | mvnsite | 0m 37s | [/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2323/10/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 0m 43s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 33s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 45s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | find
[GitHub] [hadoop] hadoop-yetus commented on pull request #2323: HADOOP-16830. Add public IOStatistics API.
hadoop-yetus commented on pull request #2323: URL: https://github.com/apache/hadoop/pull/2323#issuecomment-707314832 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 29m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 11 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 29m 53s | | trunk passed | | +1 :green_heart: | compile | 22m 24s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 20m 48s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 58s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 38s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 36s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 35s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 43s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +0 :ok: | spotbugs | 2m 50s | | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 2m 47s | | trunk passed | | -0 :warning: | patch | 3m 11s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 57s | | the patch passed | | +1 :green_heart: | compile | 24m 10s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | -1 :x: | javac | 24m 10s | [/diff-compile-javac-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2323/10/artifact/out/diff-compile-javac-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt) | root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 20 new + 2050 unchanged - 0 fixed = 2070 total (was 2050) | | -1 :x: | compile | 5m 14s | [/patch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2323/10/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt) | root in the patch failed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. | | -1 :x: | javac | 5m 14s | [/patch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2323/10/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt) | root in the patch failed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. | | -0 :warning: | checkstyle | 0m 36s | [/buildtool-patch-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2323/10/artifact/out/buildtool-patch-checkstyle-hadoop-common-project_hadoop-common.txt) | The patch fails to run checkstyle in hadoop-common | | -1 :x: | mvnsite | 0m 37s | [/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2323/10/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. 
| | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 0m 43s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 33s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 45s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | findbugs | 2m 57s | | the patch passed | _ Other Tests _ | | -1 :x: | unit | 10m 52s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2323/10/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | | The patch does not generate ASF License warnin
[jira] [Commented] (HADOOP-17223) update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13
[ https://issues.apache.org/jira/browse/HADOOP-17223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212634#comment-17212634 ] Pranav Bheda commented on HADOOP-17223: --- Thanks for pointing that out. I'll just update my PR with the latest version of http-client. > update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13 > - > > Key: HADOOP-17223 > URL: https://issues.apache.org/jira/browse/HADOOP-17223 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.0 >Reporter: Pranav Bheda >Priority: Major > Labels: pull-request-available > Attachments: HADOOP-17223.001.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Update the dependencies > * org.apache.httpcomponents:httpclient from 4.5.6 to 4.5.12 > * org.apache.httpcomponents:httpcore from 4.4.10 to 4.4.13 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
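For reference, the dependency bump described in HADOOP-17223 would correspond to a Maven change along these lines (a sketch only; the exact property names and location in Hadoop's hadoop-project/pom.xml may differ):

```xml
<!-- Hypothetical property-style version pins; Hadoop's pom may name these differently. -->
<properties>
  <httpclient.version>4.5.12</httpclient.version>
  <httpcore.version>4.4.13</httpcore.version>
</properties>
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>${httpclient.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpcore</artifactId>
      <version>${httpcore.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Pinning both artifacts in `dependencyManagement` keeps the two HttpComponents versions consistent across all Hadoop modules that pull them in transitively.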
[GitHub] [hadoop] hadoop-yetus commented on pull request #2380: HDFS-15626 TestWebHDFS.testLargeDirectory failing
hadoop-yetus commented on pull request #2380: URL: https://github.com/apache/hadoop/pull/2380#issuecomment-707310518 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 33m 32s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 59s | | trunk passed | | +1 :green_heart: | compile | 1m 17s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 1m 8s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 45s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 18s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 39s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 51s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 18s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +0 :ok: | spotbugs | 3m 8s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 3m 6s | | trunk passed | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 10s | | the patch passed | | +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javac | 1m 11s | | the patch passed | | +1 :green_heart: | compile | 1m 5s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | javac | 1m 5s | | the patch passed | | +1 :green_heart: | checkstyle | 0m 39s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 9s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 49s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 15s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | findbugs | 3m 11s | | the patch passed | _ Other Tests _ | | -1 :x: | unit | 112m 33s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2380/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. 
| | | | 234m 4s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestDFSOutputStream | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.TestFileChecksum | | | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics | | | hadoop.hdfs.server.datanode.TestBPOfferService | | | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncCheckerTimeout | | | hadoop.hdfs.TestFileChecksumCompositeCrc | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2380/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2380 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 94e89841812d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / b92f72758bf | | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2380/1/testReport/ | | Max. process+thread count | 2735 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-h
[jira] [Created] (HADOOP-17304) Fix TestKMS.testACLs
Xiaoyu Yao created HADOOP-17304: --- Summary: Fix TestKMS.testACLs Key: HADOOP-17304 URL: https://issues.apache.org/jira/browse/HADOOP-17304 Project: Hadoop Common Issue Type: Improvement Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao HADOOP-17208 sends an invalidate cache request for the key being deleted. The invalidate cache operation itself requires ROLLOVER permission on the key. This ticket was opened to fix the test. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17208) LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all KMSClientProvider instances
[ https://issues.apache.org/jira/browse/HADOOP-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212623#comment-17212623 ] Xiaoyu Yao commented on HADOOP-17208: - Good catch [~ayushtkn]. And thanks [~hexiaoqiao] for looking into this. The original design of the INVALIDATE_CACHE op is tied to the ROLLOVER ACL. The test itself can be fixed by allowing the "DELETE" user to have ROLLOVER, just as SET_KEY_MATERIAL does: conf.set(KMSACLs.Type.ROLLOVER.getAclConfigKey(), KMSACLs.Type.ROLLOVER.toString() + ",SET_KEY_MATERIAL,DELETE"); It would be much cleaner if we had a separate INVALIDATE_CACHE ACL type to distinguish INVALIDATE_CACHE from ROLLOVER itself, like SET_KEY_MATERIAL and DELETE. > LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all > KMSClientProvider instances > - > > Key: HADOOP-17208 > URL: https://issues.apache.org/jira/browse/HADOOP-17208 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.8.4 >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 2h 20m > Remaining Estimate: 0h > > Without invalidateCache, the deleted key may still exist in the servers' key > cache (CachingKeyProvider in KMSWebApp.java) where the delete key was not > hit. Clients may still be able to access encrypted files by specifying to > connect to KMS instances with a cached version of the deleted key before the > cache entry (10 min by default) expires. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
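The failure mode discussed above can be illustrated with a minimal, self-contained sketch. This is not the actual KMSACLs API; the class, enum, and method names below are hypothetical stand-ins that only model the gating rule described in the comments (INVALIDATE_CACHE is authorized via the ROLLOVER ACL, so a caller holding only DELETE is rejected):

```java
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

public class AclSketch {
    // Hypothetical operation names mirroring the ACL types mentioned in the ticket.
    enum Op { DELETE, ROLLOVER, INVALIDATE_CACHE }

    // Which ACL grants each operation; INVALIDATE_CACHE is gated by ROLLOVER,
    // which is the coupling the comment above calls out.
    static final Map<Op, Op> GATE = Map.of(
        Op.DELETE, Op.DELETE,
        Op.ROLLOVER, Op.ROLLOVER,
        Op.INVALIDATE_CACHE, Op.ROLLOVER);

    static boolean allowed(Set<Op> heldAcls, Op op) {
        return heldAcls.contains(GATE.get(op));
    }

    public static void main(String[] args) {
        Set<Op> deleteOnly = EnumSet.of(Op.DELETE);
        // deleteKey now also issues INVALIDATE_CACHE, which fails without ROLLOVER:
        System.out.println(allowed(deleteOnly, Op.DELETE));           // true
        System.out.println(allowed(deleteOnly, Op.INVALIDATE_CACHE)); // false

        // The test fix grants ROLLOVER to the DELETE user, so the call succeeds:
        Set<Op> fixed = EnumSet.of(Op.DELETE, Op.ROLLOVER);
        System.out.println(allowed(fixed, Op.INVALIDATE_CACHE));      // true
    }
}
```

A separate INVALIDATE_CACHE entry in the gating map, as suggested above, would remove the need to hand ROLLOVER to delete-only callers.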
[GitHub] [hadoop] hadoop-yetus commented on pull request #2380: HDFS-15626 TestWebHDFS.testLargeDirectory failing
hadoop-yetus commented on pull request #2380: URL: https://github.com/apache/hadoop/pull/2380#issuecomment-707300065 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 29m 4s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 29m 31s | | trunk passed | | +1 :green_heart: | compile | 1m 17s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 1m 10s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 51s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 22s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 24s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 53s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 22s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +0 :ok: | spotbugs | 2m 59s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 2m 57s | | trunk passed | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 6s | | the patch passed | | +1 :green_heart: | compile | 1m 8s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javac | 1m 8s | | the patch passed | | +1 :green_heart: | compile | 1m 3s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | javac | 1m 3s | | the patch passed | | +1 :green_heart: | checkstyle | 0m 42s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 9s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 14m 5s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 45s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 26s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | findbugs | 3m 1s | | the patch passed | _ Other Tests _ | | -1 :x: | unit | 96m 22s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2380/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. 
| | | | 208m 14s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestFileChecksumCompositeCrc | | | hadoop.hdfs.TestDecommissionWithBackoffMonitor | | | hadoop.hdfs.TestMaintenanceState | | | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.TestFileChecksum | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2380/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2380 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ba3cedb77c78 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / b92f72758bf | | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2380/2/testReport/ | | Max. process+thread count | 4205 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/ha
[jira] [Commented] (HADOOP-16990) Update Mockserver
[ https://issues.apache.org/jira/browse/HADOOP-16990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212577#comment-17212577 ] Wei-Chiu Chuang commented on HADOOP-16990: -- Ugh. I'm sorry. Committed into the 3.3 branch with the 3.1 patch. > Update Mockserver > - > > Key: HADOOP-16990 > URL: https://issues.apache.org/jira/browse/HADOOP-16990 > Project: Hadoop Common > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Attila Doroszlai >Priority: Major > Fix For: 3.4.0 > > Attachments: HADOOP-16990-branch-3.1.004.patch, > HADOOP-16990-branch-3.3.002.patch, HADOOP-16990.001.patch, > HDFS-15620-branch-3.3-addendum.patch > > > We are on Mockserver 3.9.2 which is more than 5 years old. Time to update. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17271) S3A statistics to support IOStatistics
[ https://issues.apache.org/jira/browse/HADOOP-17271?focusedWorklogId=499571&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499571 ] ASF GitHub Bot logged work on HADOOP-17271: --- Author: ASF GitHub Bot Created on: 12/Oct/20 18:21 Start Date: 12/Oct/20 18:21 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2324: URL: https://github.com/apache/hadoop/pull/2324#issuecomment-707275981 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 3s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 3s | | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 43 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 6m 9s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 24m 9s | | trunk passed | | +1 :green_heart: | compile | 19m 36s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 16m 53s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | checkstyle | 2m 54s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 15s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 57s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 50s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 2m 54s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +0 :ok: | spotbugs | 1m 15s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 4m 50s | | trunk passed | | -0 :warning: | patch | 1m 36s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 57s | | the patch passed | | +1 :green_heart: | compile | 18m 49s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javac | 18m 49s | | root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2048 unchanged - 1 fixed = 2048 total (was 2049) | | +1 :green_heart: | compile | 16m 55s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | javac | 16m 55s | | root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1941 unchanged - 1 fixed = 1941 total (was 1942) | | -0 :warning: | checkstyle | 2m 48s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/15/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 5 new + 267 unchanged - 25 fixed = 272 total (was 292) | | +1 :green_heart: | mvnsite | 3m 15s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 14m 26s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 49s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 36s | | hadoop-common in the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. 
| | +1 :green_heart: | javadoc | 0m 35s | | hadoop-mapreduce-client-core in the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. | | +1 :green_heart: | javadoc | 0m 43s | | hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 0 unchanged - 4 fixed = 0 total (was 4) | | +1 :green_heart: | findbugs | 5m 15s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 10m 16s | | hadoop-common in the patch passed. |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2324: HADOOP-17271. S3A statistics to support IOStatistics
hadoop-yetus commented on pull request #2324: URL: https://github.com/apache/hadoop/pull/2324#issuecomment-707275981 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 3s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 3s | | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 43 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 6m 9s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 24m 9s | | trunk passed | | +1 :green_heart: | compile | 19m 36s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 16m 53s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | checkstyle | 2m 54s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 15s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 57s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 50s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 2m 54s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +0 :ok: | spotbugs | 1m 15s | | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 4m 50s | | trunk passed | | -0 :warning: | patch | 1m 36s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 57s | | the patch passed | | +1 :green_heart: | compile | 18m 49s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javac | 18m 49s | | root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2048 unchanged - 1 fixed = 2048 total (was 2049) | | +1 :green_heart: | compile | 16m 55s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | javac | 16m 55s | | root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1941 unchanged - 1 fixed = 1941 total (was 1942) | | -0 :warning: | checkstyle | 2m 48s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/15/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 5 new + 267 unchanged - 25 fixed = 272 total (was 292) | | +1 :green_heart: | mvnsite | 3m 15s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 14m 26s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 49s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 36s | | hadoop-common in the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. | | +1 :green_heart: | javadoc | 0m 35s | | hadoop-mapreduce-client-core in the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. 
| | +1 :green_heart: | javadoc | 0m 43s | | hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 0 unchanged - 4 fixed = 0 total (was 4) | | +1 :green_heart: | findbugs | 5m 15s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 10m 16s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 7m 5s | | hadoop-mapreduce-client-core in the patch passed. | | +1 :green_heart: | unit | 1m 45s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | | The patch does not generate ASF License warnings. | | | | 193m 22s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI
[jira] [Work logged] (HADOOP-17288) Use shaded guava from thirdparty
[ https://issues.apache.org/jira/browse/HADOOP-17288?focusedWorklogId=499533&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499533 ] ASF GitHub Bot logged work on HADOOP-17288: --- Author: ASF GitHub Bot Created on: 12/Oct/20 17:08 Start Date: 12/Oct/20 17:08 Worklog Time Spent: 10m Work Description: saintstack commented on a change in pull request #2342: URL: https://github.com/apache/hadoop/pull/2342#discussion_r503423920 ## File path: hadoop-project/pom.xml ## @@ -2198,7 +2227,7 @@ - true + false Review comment: This intentional? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 499533) Time Spent: 2h (was: 1h 50m) > Use shaded guava from thirdparty > > > Key: HADOOP-17288 > URL: https://issues.apache.org/jira/browse/HADOOP-17288 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 2h > Remaining Estimate: 0h > > Use the shaded version of guava in hadoop-thirdparty -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] saintstack commented on a change in pull request #2342: HADOOP-17288. Use shaded guava from thirdparty.
saintstack commented on a change in pull request #2342: URL: https://github.com/apache/hadoop/pull/2342#discussion_r503423920 ## File path: hadoop-project/pom.xml ## @@ -2198,7 +2227,7 @@ - true + false Review comment: This intentional? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16492) Support HuaweiCloud Object Storage as a Hadoop Backend File System
[ https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212506#comment-17212506 ] Hadoop QA commented on HADOOP-16492: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 25s{color} | | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 1s{color} | | {color:green} No case conflicting files found. {color} | | {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue} 0m 1s{color} | | {color:blue} markdownlint was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | | {color:green} The patch appears to include 21 new or modified test files. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 10s{color} | | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 53s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 14s{color} | | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 47s{color} | | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 0s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 27s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 21m 37s{color} | | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s{color} | | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s{color} | | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 0m 45s{color} | | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 31s{color} | | {color:blue} branch/hadoop-project no findbugs output file (findbugsXml.xml) {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 28s{color} | | {color:blue} branch/hadoop-tools/hadoop-tools-dist no findbugs output file (findbugsXml.xml) {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 27s{color} | | {color:blue} branch/hadoop-cloud-storage-project/hadoop-cloud-storage no findbugs output file (findbugsXml.xml) {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 42s{color} | | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 37s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 11s{color} | | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 11s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 36s{color} | | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 23m 36s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} blanks {color} | {color:green} 0m 0s{color} | | {color:green} The patch has no blanks issues. 
{color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 20s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 48s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} xml {color} |
[GitHub] [hadoop] smengcl commented on a change in pull request #2370: HDFS-15614. Initialize snapshot trash root during NameNode startup if enabled
smengcl commented on a change in pull request #2370: URL: https://github.com/apache/hadoop/pull/2370#discussion_r503422423 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java ## @@ -781,6 +781,10 @@ protected void initialize(Configuration conf) throws IOException { } } +if (namesystem.getIsSnapshotTrashRootEnabled()) { Review comment: good idea This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17288) Use shaded guava from thirdparty
[ https://issues.apache.org/jira/browse/HADOOP-17288?focusedWorklogId=499524&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499524 ] ASF GitHub Bot logged work on HADOOP-17288: --- Author: ASF GitHub Bot Created on: 12/Oct/20 16:56 Start Date: 12/Oct/20 16:56 Worklog Time Spent: 10m Work Description: saintstack commented on a change in pull request #2342: URL: https://github.com/apache/hadoop/pull/2342#discussion_r503413775 ## File path: Jenkinsfile ## @@ -23,7 +23,7 @@ pipeline { options { buildDiscarder(logRotator(numToKeepStr: '5')) -timeout (time: 20, unit: 'HOURS') +timeout (time: 30, unit: 'HOURS') Review comment: This change intentional? How does it relate? ## File path: hadoop-common-project/hadoop-common/pom.xml ## @@ -49,6 +49,12 @@ hadoop-annotations compile + + org.apache.hadoop.thirdparty + hadoop-shaded-guava + compile Review comment: nit: no need of the compile scope since it default? ## File path: hadoop-common-project/hadoop-common/pom.xml ## @@ -49,6 +49,12 @@ hadoop-annotations compile + + org.apache.hadoop.thirdparty + hadoop-shaded-guava + compile + + Review comment: Helpful comment. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 499524) Time Spent: 1h 50m (was: 1h 40m) > Use shaded guava from thirdparty > > > Key: HADOOP-17288 > URL: https://issues.apache.org/jira/browse/HADOOP-17288 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 50m > Remaining Estimate: 0h > > Use the shaded version of guava in hadoop-thirdparty -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on pull request #2380: HDFS-15626 TestWebHDFS.testLargeDirectory failing
steveloughran commented on pull request #2380: URL: https://github.com/apache/hadoop/pull/2380#issuecomment-707234889 let's see what yetus says. Moved the issue to being an HDFS JIRA This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17125) Using snappy-java in SnappyCodec
[ https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=499515&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499515 ] ASF GitHub Bot logged work on HADOOP-17125: --- Author: ASF GitHub Bot Created on: 12/Oct/20 16:51 Start Date: 12/Oct/20 16:51 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2297: URL: https://github.com/apache/hadoop/pull/2297#issuecomment-707232854 you needed to be listed in the project settings as someone with the right permissions. its done now This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 499515) Time Spent: 25h 50m (was: 25h 40m) > Using snappy-java in SnappyCodec > > > Key: HADOOP-17125 > URL: https://issues.apache.org/jira/browse/HADOOP-17125 > Project: Hadoop Common > Issue Type: New Feature > Components: common >Affects Versions: 3.3.0 >Reporter: DB Tsai >Assignee: L. C. Hsieh >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0 > > Time Spent: 25h 50m > Remaining Estimate: 0h > > In Hadoop, we use native libs for snappy codec which has several > disadvantages: > * It requires native *libhadoop* and *libsnappy* to be installed in system > *LD_LIBRARY_PATH*, and they have to be installed separately on each node of > the clusters, container images, or local test environments, which adds huge > complexities from a deployment point of view. In some environments, it requires > compiling the natives from sources, which is non-trivial. Also, this approach > is platform dependent; the binary may not work on a different platform, so it > requires recompilation.
> * It requires extra configuration of *java.library.path* to load the > natives, and it results in higher application deployment and maintenance cost > for users. > Projects such as *Spark* and *Parquet* use > [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based > implementation. It contains native binaries for Linux, Mac, and IBM in the jar > file, and it can automatically load the native binaries into the JVM from the jar > without any setup. If a native implementation cannot be found for a > platform, it can fall back to a pure-Java implementation of snappy based on > [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy]. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
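The loading strategy described in the issue, try a native library first and fall back to pure Java when it cannot be loaded, can be sketched in plain Java. This is an illustrative stand-in only: the class and method names are hypothetical, the real snappy-java loader extracts a bundled native binary from its jar, and DEFLATE substitutes for snappy here simply because the JDK ships it.

```java
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Hypothetical sketch of a "native first, pure-Java fallback" codec loader,
// similar in spirit to snappy-java's approach. Not the real snappy-java or
// Hadoop API; DEFLATE stands in for snappy.
public class FallbackCodec {

    // Report whether a native library can be bound. On failure the caller
    // falls through to the pure-Java path instead of aborting.
    static boolean nativeAvailable(String libName) {
        try {
            System.loadLibrary(libName);
            return true;
        } catch (UnsatisfiedLinkError e) {
            return false;
        }
    }

    // Pure-Java compression path (the role aircompressor plays in the issue).
    static byte[] compress(byte[] in) {
        Deflater deflater = new Deflater();
        deflater.setInput(in);
        deflater.finish();
        byte[] buf = new byte[in.length * 2 + 64];
        int n = deflater.deflate(buf);
        deflater.end();
        return Arrays.copyOf(buf, n);
    }

    static byte[] decompress(byte[] in, int originalLen) throws DataFormatException {
        Inflater inflater = new Inflater();
        inflater.setInput(in);
        byte[] out = new byte[originalLen];
        inflater.inflate(out);
        inflater.end();
        return out;
    }
}
```

The point of the pattern is that a missing native library becomes a silent capability downgrade rather than an `UnsatisfiedLinkError` at codec-use time, which is what removes the `LD_LIBRARY_PATH` and `java.library.path` setup burden described above.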
[jira] [Updated] (HADOOP-17038) Support disabling buffered reads in ABFS positional reads
[ https://issues.apache.org/jira/browse/HADOOP-17038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HADOOP-17038: Description: Right now it will do a seek to the position , read and then seek back to the old position. (As per the impl in the super class) In HBase kind of workloads we rely mostly on short preads. (like 64 KB size by default). So would be ideal to support a pure pos read API which will not even keep the data in a buffer but will only read the required data as what is asked for by the caller. (Not reading ahead more data as per the read size config) Allow an optional boolean config to be specified while opening file for read using which buffered pread can be disabled. FutureDataInputStreamBuilder openFile(Path path) was: Right now it will do a seek to the position , read and then seek back to the old position. (As per the impl in the super class) In HBase kind of workloads we rely mostly on short preads. (like 64 KB size by default). So would be ideal to support a pure pos read API which will not even keep the data in a buffer but will only read the required data as what is asked for by the caller. (Not reading ahead more data as per the read size config) > Support disabling buffered reads in ABFS positional reads > - > > Key: HADOOP-17038 > URL: https://issues.apache.org/jira/browse/HADOOP-17038 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Major > Labels: HBase, abfsactive, pull-request-available > Attachments: HBase Perf Test Report.xlsx, screenshot-1.png > > Time Spent: 40m > Remaining Estimate: 0h > > Right now it will do a seek to the position , read and then seek back to the > old position. (As per the impl in the super class) > In HBase kind of workloads we rely mostly on short preads. (like 64 KB size > by default). 
So would be ideal to support a pure pos read API which will not > even keep the data in a buffer but will only read the required data as what > is asked for by the caller. (Not reading ahead more data as per the read size > config) > Allow an optional boolean config to be specified while opening file for read > using which buffered pread can be disabled. > FutureDataInputStreamBuilder openFile(Path path) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
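The seek/read/seek-back pattern and the proposed pure positional read can be contrasted with a small java.nio sketch. This is not the ABFS implementation; `FileChannel` stands in for the remote stream, and `FileChannel.read(ByteBuffer, long)` demonstrates the desired behaviour of reading only the requested bytes without touching the shared stream position or any read-ahead buffer.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Sketch of the two read styles discussed in the issue (not the ABFS code).
public class PreadSketch {

    // What the superclass implementation effectively does today: three
    // operations, with the stream position disturbed in between.
    static int seekRead(FileChannel ch, ByteBuffer dst, long pos) throws IOException {
        long old = ch.position();
        ch.position(pos);
        int n = ch.read(dst);
        ch.position(old); // seek back to the old position
        return n;
    }

    // A pure positional read: one call, shared position untouched, and only
    // the bytes the caller asked for are fetched (no read-ahead).
    static int pread(FileChannel ch, ByteBuffer dst, long pos) throws IOException {
        return ch.read(dst, pos);
    }
}
```

For HBase-style short preads (64 KB by default), the second form avoids both the extra position bookkeeping and the cost of buffering data the caller never asked for.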
[jira] [Work logged] (HADOOP-17125) Using snappy-java in SnappyCodec
[ https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=499484&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499484 ] ASF GitHub Bot logged work on HADOOP-17125: --- Author: ASF GitHub Bot Created on: 12/Oct/20 16:02 Start Date: 12/Oct/20 16:02 Worklog Time Spent: 10m Work Description: viirya commented on pull request #2297: URL: https://github.com/apache/hadoop/pull/2297#issuecomment-707207886 @steveloughran Thank you! I tried to assign this ticket, but seems cannot do it. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 499484) Time Spent: 25h 40m (was: 25.5h) > Using snappy-java in SnappyCodec > > > Key: HADOOP-17125 > URL: https://issues.apache.org/jira/browse/HADOOP-17125 > Project: Hadoop Common > Issue Type: New Feature > Components: common >Affects Versions: 3.3.0 >Reporter: DB Tsai >Assignee: L. C. Hsieh >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0 > > Time Spent: 25h 40m > Remaining Estimate: 0h > > In Hadoop, we use native libs for snappy codec which has several > disadvantages: > * It requires native *libhadoop* and *libsnappy* to be installed in system > *LD_LIBRARY_PATH*, and they have to be installed separately on each node of > the clusters, container images, or local test environments, which adds huge > complexities from a deployment point of view. In some environments, it requires > compiling the natives from sources, which is non-trivial. Also, this approach > is platform dependent; the binary may not work on a different platform, so it > requires recompilation. > * It requires extra configuration of *java.library.path* to load the > natives, and it results in higher application deployment and maintenance cost > for users.
> Projects such as *Spark* and *Parquet* use > [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based > implementation. It contains native binaries for Linux, Mac, and IBM in the jar > file, and it can automatically load the native binaries into the JVM from the jar > without any setup. If a native implementation cannot be found for a > platform, it can fall back to a pure-Java implementation of snappy based on > [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy]. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17281) Implement FileSystem.listStatusIterator() in S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-17281?focusedWorklogId=499479&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499479 ] ASF GitHub Bot logged work on HADOOP-17281: --- Author: ASF GitHub Bot Created on: 12/Oct/20 15:47 Start Date: 12/Oct/20 15:47 Worklog Time Spent: 10m Work Description: mukund-thakur commented on pull request #2380: URL: https://github.com/apache/hadoop/pull/2380#issuecomment-707199365 CC @steveloughran . Please check. Thanks This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 499479) Time Spent: 3h 40m (was: 3.5h) > Implement FileSystem.listStatusIterator() in S3AFileSystem > -- > > Key: HADOOP-17281 > URL: https://issues.apache.org/jira/browse/HADOOP-17281 > Project: Hadoop Common > Issue Type: Task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Mukund Thakur >Assignee: Mukund Thakur >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0 > > Time Spent: 3h 40m > Remaining Estimate: 0h > > Currently S3AFileSystem only implements listStatus() api which returns an > array. Once we implement the listStatusIterator(), clients can benefit from > the async listing done recently > https://issues.apache.org/jira/browse/HADOOP-17074 by performing some tasks > on files while iterating them. > > CC [~stevel] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17281) Implement FileSystem.listStatusIterator() in S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-17281?focusedWorklogId=499478&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499478 ] ASF GitHub Bot logged work on HADOOP-17281: --- Author: ASF GitHub Bot Created on: 12/Oct/20 15:46 Start Date: 12/Oct/20 15:46 Worklog Time Spent: 10m Work Description: mukund-thakur opened a new pull request #2380: URL: https://github.com/apache/hadoop/pull/2380 Re-ran TestWebHDFS. All good. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 499478) Time Spent: 3.5h (was: 3h 20m) > Implement FileSystem.listStatusIterator() in S3AFileSystem > -- > > Key: HADOOP-17281 > URL: https://issues.apache.org/jira/browse/HADOOP-17281 > Project: Hadoop Common > Issue Type: Task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Mukund Thakur >Assignee: Mukund Thakur >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0 > > Time Spent: 3.5h > Remaining Estimate: 0h > > Currently S3AFileSystem only implements listStatus() api which returns an > array. Once we implement the listStatusIterator(), clients can benefit from > the async listing done recently > https://issues.apache.org/jira/browse/HADOOP-17074 by performing some tasks > on files while iterating them. > > CC [~stevel] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17281) Implement FileSystem.listStatusIterator() in S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-17281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212453#comment-17212453 ] Mukund Thakur commented on HADOOP-17281: Sorry, we ran the dfs client test suite but not this. Created a ticket here https://issues.apache.org/jira/browse/HADOOP-17303 . I will upload a PR soon. > Implement FileSystem.listStatusIterator() in S3AFileSystem > -- > > Key: HADOOP-17281 > URL: https://issues.apache.org/jira/browse/HADOOP-17281 > Project: Hadoop Common > Issue Type: Task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Mukund Thakur >Assignee: Mukund Thakur >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0 > > Time Spent: 3h 20m > Remaining Estimate: 0h > > Currently S3AFileSystem only implements listStatus() api which returns an > array. Once we implement the listStatusIterator(), clients can benefit from > the async listing done recently > https://issues.apache.org/jira/browse/HADOOP-17074 by performing some tasks > on files while iterating them. > > CC [~stevel] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
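The benefit the HADOOP-17281 description mentions, performing work on files while iterating them, comes from lazy page fetching. A pure-Java sketch, with hypothetical names standing in for `RemoteIterator` and the paged S3 LIST call:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.IntFunction;

// Hypothetical sketch of why an iterator-style listing beats a one-shot
// array: pages are fetched only on demand, so the caller overlaps its own
// work with later LIST requests. The real API is
// FileSystem.listStatusIterator() returning a RemoteIterator<FileStatus>.
public class PagedListing implements Iterator<String> {
    private final IntFunction<List<String>> fetchPage; // one call = one paged LIST request
    private List<String> page = new ArrayList<>();
    private int pageNo = 0;
    private int idx = 0;
    private boolean exhausted = false;

    PagedListing(IntFunction<List<String>> fetchPage) {
        this.fetchPage = fetchPage;
    }

    @Override
    public boolean hasNext() {
        while (idx >= page.size() && !exhausted) {
            page = fetchPage.apply(pageNo++); // next page requested only on demand
            idx = 0;
            if (page.isEmpty()) {
                exhausted = true; // an empty page marks the end of the listing
            }
        }
        return idx < page.size();
    }

    @Override
    public String next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        return page.get(idx++);
    }
}
```

With a listStatus()-style array, the caller pays for every page up front; with the iterator, a caller that stops early, or that does per-file work between calls to next(), never waits on pages it has not reached yet.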
[jira] [Created] (HADOOP-17303) TestWebHDFS.testLargeDirectory failing because of HADOOP-17281
Mukund Thakur created HADOOP-17303: -- Summary: TestWebHDFS.testLargeDirectory failing because of HADOOP-17281 Key: HADOOP-17303 URL: https://issues.apache.org/jira/browse/HADOOP-17303 Project: Hadoop Common Issue Type: Bug Components: fs Reporter: Mukund Thakur Assignee: Mukund Thakur -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16769) LocalDirAllocator to provide diagnostics when file creation fails
[ https://issues.apache.org/jira/browse/HADOOP-16769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212448#comment-17212448 ] Hadoop QA commented on HADOOP-16769: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 19s{color} | | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 40s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 39s{color} | | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 48s{color} | | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 0s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 20m 47s{color} | | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s{color} | | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 3m 14s{color} | | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 11s{color} | | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 2s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 32s{color} | | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 32s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 6s{color} | | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 23m 6s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} blanks {color} | {color:green} 0m 0s{color} | | {color:green} The patch has no blanks issues. 
{color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 21m 17s{color} | | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s{color} | | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 52s{color} | | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 33s{color} | | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green}
[jira] [Work logged] (HADOOP-16878) Copy command in FileUtil to throw an exception if the source and destination is the same
[ https://issues.apache.org/jira/browse/HADOOP-16878?focusedWorklogId=499447&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499447 ] ASF GitHub Bot logged work on HADOOP-16878: --- Author: ASF GitHub Bot Created on: 12/Oct/20 15:01 Start Date: 12/Oct/20 15:01 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #1857: URL: https://github.com/apache/hadoop/pull/1857#issuecomment-707173426 @ayushtkn -think Gabor is just too busy with other things. Do you fancy doing the final fixup? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 499447) Time Spent: 0.5h (was: 20m) > Copy command in FileUtil to throw an exception if the source and destination > is the same > > > Key: HADOOP-16878 > URL: https://issues.apache.org/jira/browse/HADOOP-16878 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.0 >Reporter: Gabor Bota >Assignee: Gabor Bota >Priority: Major > Labels: pull-request-available > Attachments: hdfsTest.patch > > Time Spent: 0.5h > Remaining Estimate: 0h > > We encountered an error during a test in our QE when the file destination and > source path were the same. This happened during an ADLS test, and there were > no meaningful error messages, so it was hard to find the root cause of the > failure. > The error we saw was that the file size had changed during the copy operation. > The new file creation in the destination - which is the same as the source - > creates a file and sets the file length to zero. After this, getting the > source file will fail because the file size changed during the operation.
> I propose a solution to at least log in error level in the {{FileUtil}} if > the source and destination of the copy operation is the same, so debugging > issues like this will be easier in the future. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
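The failure mode described above (creating the destination truncates the file that is also the source, so a later read sees a changed size) can be avoided with an up-front same-file check before copying. The following is a minimal standalone sketch in plain java.nio — the class and method names are illustrative, not the actual org.apache.hadoop.fs.FileUtil code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch only, not the real Hadoop FileUtil implementation.
public class SameFileCopyGuard {

    // Fail early with a clear message, instead of letting the copy
    // truncate the destination (== source) to zero bytes and later fail
    // with a confusing "file size changed during copy" error.
    public static void checkNotSameFile(Path src, Path dst) throws IOException {
        if (Files.exists(src) && Files.exists(dst)
                && Files.isSameFile(src, dst)) {
            throw new IOException("Source " + src + " and destination " + dst
                + " are the same file; refusing to copy");
        }
    }
}
```

Files.isSameFile also catches the non-obvious cases (symlinks, relative path segments such as "a/../a/f") that a plain string comparison of the two paths would miss.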
[jira] [Work logged] (HADOOP-16878) Copy command in FileUtil to throw an exception if the source and destination is the same
[ https://issues.apache.org/jira/browse/HADOOP-16878?focusedWorklogId=499445&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499445 ] ASF GitHub Bot logged work on HADOOP-16878: --- Author: ASF GitHub Bot Created on: 12/Oct/20 14:58 Start Date: 12/Oct/20 14:58 Worklog Time Spent: 10m Work Description: hadoop-yetus removed a comment on pull request #1857: URL: https://github.com/apache/hadoop/pull/1857#issuecomment-590453321 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 499445) Time Spent: 20m (was: 10m) > Copy command in FileUtil to throw an exception if the source and destination > is the same > > > Key: HADOOP-16878 > URL: https://issues.apache.org/jira/browse/HADOOP-16878 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.0 >Reporter: Gabor Bota >Assignee: Gabor Bota >Priority: Major > Labels: pull-request-available > Attachments: hdfsTest.patch > > Time Spent: 20m > Remaining Estimate: 0h > > We encountered an error during a test in our QE when the file destination and > source path were the same. This happened during an ADLS test, and there were > no meaningful error messages, so it was hard to find the root cause of the > failure. > The error we saw was that the file size had changed during the copy operation. > The new file creation in the destination - which is the same as the source - > creates a file and sets the file length to zero. After this, getting the > source file will fail because the file size changed during the operation. > I propose a solution to at least log in error level in the {{FileUtil}} if > the source and destination of the copy operation is the same, so debugging > issues like this will be easier in the future. 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17271) S3A statistics to support IOStatistics
[ https://issues.apache.org/jira/browse/HADOOP-17271?focusedWorklogId=499437&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499437 ] ASF GitHub Bot logged work on HADOOP-17271: --- Author: ASF GitHub Bot Created on: 12/Oct/20 14:41 Start Date: 12/Oct/20 14:41 Worklog Time Spent: 10m Work Description: hadoop-yetus removed a comment on pull request #2324: URL: https://github.com/apache/hadoop/pull/2324#issuecomment-705731165 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 31m 19s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 3s | | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 43 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 6m 11s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 58s | | trunk passed | | +1 :green_heart: | compile | 19m 44s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 17m 15s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | checkstyle | 2m 56s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 17s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 8s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 50s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 2m 57s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +0 :ok: | spotbugs | 1m 16s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 4m 51s | | trunk passed | | -0 :warning: | patch | 1m 37s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 58s | | the patch passed | | +1 :green_heart: | compile | 19m 17s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javac | 19m 17s | | root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2048 unchanged - 1 fixed = 2048 total (was 2049) | | +1 :green_heart: | compile | 17m 12s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | javac | 17m 12s | | root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1941 unchanged - 1 fixed = 1941 total (was 1942) | | -0 :warning: | checkstyle | 2m 50s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/14/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 5 new + 267 unchanged - 25 fixed = 272 total (was 292) | | +1 :green_heart: | mvnsite | 3m 20s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 14m 26s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 51s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 37s | | hadoop-common in the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. 
| | +1 :green_heart: | javadoc | 0m 37s | | hadoop-mapreduce-client-core in the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. | | +1 :green_heart: | javadoc | 0m 45s | | hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 0 unchanged - 4 fixed = 0 total (was 4) | | +1 :green_heart: | findbugs | 5m 17s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 9m 40s | | hadoop-common in the patch pas
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2324: HADOOP-17271. S3A statistics to support IOStatistics
hadoop-yetus removed a comment on pull request #2324: URL: https://github.com/apache/hadoop/pull/2324#issuecomment-705731165 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 31m 19s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 3s | | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 43 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 6m 11s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 58s | | trunk passed | | +1 :green_heart: | compile | 19m 44s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 17m 15s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | checkstyle | 2m 56s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 17s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 8s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 50s | | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 2m 57s | | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +0 :ok: | spotbugs | 1m 16s | | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 4m 51s | | trunk passed | | -0 :warning: | patch | 1m 37s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 58s | | the patch passed | | +1 :green_heart: | compile | 19m 17s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javac | 19m 17s | | root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2048 unchanged - 1 fixed = 2048 total (was 2049) | | +1 :green_heart: | compile | 17m 12s | | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | javac | 17m 12s | | root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1941 unchanged - 1 fixed = 1941 total (was 1942) | | -0 :warning: | checkstyle | 2m 50s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/14/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 5 new + 267 unchanged - 25 fixed = 272 total (was 292) | | +1 :green_heart: | mvnsite | 3m 20s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 14m 26s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 51s | | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 37s | | hadoop-common in the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. | | +1 :green_heart: | javadoc | 0m 37s | | hadoop-mapreduce-client-core in the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. 
| | +1 :green_heart: | javadoc | 0m 45s | | hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 0 unchanged - 4 fixed = 0 total (was 4) | | +1 :green_heart: | findbugs | 5m 17s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 9m 40s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 7m 3s | | hadoop-mapreduce-client-core in the patch passed. | | +1 :green_heart: | unit | 1m 44s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 56s | | The patch does not generate ASF License warnings. | | | | 224m 48s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 S
[jira] [Updated] (HADOOP-16492) Support HuaweiCloud Object Storage as a Hadoop Backend File System
[ https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhongjun updated HADOOP-16492: -- Attachment: HADOOP-16492.016.patch > Support HuaweiCloud Object Storage as a Hadoop Backend File System > -- > > Key: HADOOP-16492 > URL: https://issues.apache.org/jira/browse/HADOOP-16492 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 3.4.0 >Reporter: zhongjun >Priority: Major > Attachments: Difference Between OBSA and S3A.pdf, > HADOOP-16492.001.patch, HADOOP-16492.002.patch, HADOOP-16492.003.patch, > HADOOP-16492.004.patch, HADOOP-16492.005.patch, HADOOP-16492.006.patch, > HADOOP-16492.007.patch, HADOOP-16492.008.patch, HADOOP-16492.009.patch, > HADOOP-16492.010.patch, HADOOP-16492.011.patch, HADOOP-16492.012.patch, > HADOOP-16492.013.patch, HADOOP-16492.014.patch, HADOOP-16492.015.patch, > HADOOP-16492.016.patch, OBSA HuaweiCloud OBS Adapter for Hadoop Support.pdf > > > Added support for HuaweiCloud OBS > ([https://www.huaweicloud.com/en-us/product/obs.html]) to Hadoop file system, > just like what we do before for S3, ADLS, OSS, etc. With simple > configuration, Hadoop applications can read/write data from OBS without any > code change. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17258) MagicS3GuardCommitter fails with `pendingset` already exists
[ https://issues.apache.org/jira/browse/HADOOP-17258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17258: Fix Version/s: 3.3.1 > MagicS3GuardCommitter fails with `pendingset` already exists > > > Key: HADOOP-17258 > URL: https://issues.apache.org/jira/browse/HADOOP-17258 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Dongjoon Hyun >Assignee: Dongjoon Hyun >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1 > > Time Spent: 4.5h > Remaining Estimate: 0h > > In `trunk/branch-3.3/branch-3.2`, `MagicS3GuardCommitter.innerCommitTask` has > `false` at `pendingSet.save`. > {code} > try { > pendingSet.save(getDestFS(), taskOutcomePath, false); > } catch (IOException e) { > LOG.warn("Failed to save task commit data to {} ", > taskOutcomePath, e); > abortPendingUploads(context, pendingSet.getCommits(), true); > throw e; > } > {code} > And, it can cause a job failure like the following. > {code} > WARN TaskSetManager: Lost task 1562.1 in stage 1.0 (TID 1788, 100.92.11.63, > executor 26): org.apache.spark.SparkException: Task failed while writing rows. 
> at > org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:257) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169) > at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) > at org.apache.spark.scheduler.Task.run(Task.scala:123) > at > org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408) > at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414) > at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown > Source) > at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown > Source) > at java.base/java.lang.Thread.run(Unknown Source) > Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: > s3a://xxx/__magic/app-attempt-/task_20200911063607_0001_m_001562.pendingset > already exists > at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:761) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:987) > at > org.apache.hadoop.util.JsonSerialization.save(JsonSerialization.java:269) > at > org.apache.hadoop.fs.s3a.commit.files.PendingSet.save(PendingSet.java:170) > at > org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter.innerCommitTask(MagicS3GuardCommitter.java:220) > at > org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter.commitTask(MagicS3GuardCommitter.java:165) > at > org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:50) > at > 
org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:77) > at > org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitTask(HadoopMapReduceCommitProtocol.scala:244) > at > org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:78) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242) > {code} > {code} > 20/09/11 07:44:38 ERROR TaskSetManager: Task 957.1 in stage 1.0 (TID 1412) > can not write to output file: > org.apache.hadoop.fs.FileAlreadyExistsException: > s3a://xxx/t/__magic/app-attempt-/task_20200911073922_0001_m_000957.pendingset > already exists; not retrying > {code} > The above happens in EKS with S3 environment and the job failure happens when > some executor containers are killed by K8s -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
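The root cause above is the overwrite=false flag passed to pendingSet.save: the .pendingset path is deterministic per task, so when a task attempt is retried (e.g. after K8s kills an executor), the second attempt hits FileAlreadyExistsException. The two behaviors can be sketched in plain java.nio — this is an illustrative stand-in, not the actual PendingSet/committer API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustrative sketch only, not the real PendingSet.save implementation.
public class PendingSetSave {

    public static void save(Path dest, byte[] data, boolean overwrite)
            throws IOException {
        if (overwrite) {
            // CREATE + TRUNCATE_EXISTING: a retried task attempt simply
            // replaces the earlier attempt's file, so the save is idempotent.
            Files.write(dest, data,
                StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING);
        } else {
            // CREATE_NEW: fails with FileAlreadyExistsException if a
            // previous attempt already wrote the file -- the failure
            // reported in this issue.
            Files.write(dest, data, StandardOpenOption.CREATE_NEW);
        }
    }
}
```

With overwrite=true, committing the same task twice is safe because the last write wins, which is what a retried task attempt needs.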
[jira] [Commented] (HADOOP-17258) MagicS3GuardCommitter fails with `pendingset` already exists
[ https://issues.apache.org/jira/browse/HADOOP-17258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212341#comment-17212341 ] Steve Loughran commented on HADOOP-17258: - merged to branch 3.3 & trunk -thanks! > MagicS3GuardCommitter fails with `pendingset` already exists > > > Key: HADOOP-17258 > URL: https://issues.apache.org/jira/browse/HADOOP-17258 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Dongjoon Hyun >Assignee: Dongjoon Hyun >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1 > > Time Spent: 4.5h > Remaining Estimate: 0h > > In `trunk/branch-3.3/branch-3.2`, `MagicS3GuardCommitter.innerCommitTask` has > `false` at `pendingSet.save`. > {code} > try { > pendingSet.save(getDestFS(), taskOutcomePath, false); > } catch (IOException e) { > LOG.warn("Failed to save task commit data to {} ", > taskOutcomePath, e); > abortPendingUploads(context, pendingSet.getCommits(), true); > throw e; > } > {code} > And, it can cause a job failure like the following. > {code} > WARN TaskSetManager: Lost task 1562.1 in stage 1.0 (TID 1788, 100.92.11.63, > executor 26): org.apache.spark.SparkException: Task failed while writing rows. 
> at > org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:257) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169) > at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) > at org.apache.spark.scheduler.Task.run(Task.scala:123) > at > org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408) > at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414) > at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown > Source) > at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown > Source) > at java.base/java.lang.Thread.run(Unknown Source) > Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: > s3a://xxx/__magic/app-attempt-/task_20200911063607_0001_m_001562.pendingset > already exists > at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:761) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:987) > at > org.apache.hadoop.util.JsonSerialization.save(JsonSerialization.java:269) > at > org.apache.hadoop.fs.s3a.commit.files.PendingSet.save(PendingSet.java:170) > at > org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter.innerCommitTask(MagicS3GuardCommitter.java:220) > at > org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter.commitTask(MagicS3GuardCommitter.java:165) > at > org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:50) > at > 
org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:77) > at > org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitTask(HadoopMapReduceCommitProtocol.scala:244) > at > org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:78) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242) > {code} > {code} > 20/09/11 07:44:38 ERROR TaskSetManager: Task 957.1 in stage 1.0 (TID 1412) > can not write to output file: > org.apache.hadoop.fs.FileAlreadyExistsException: > s3a://xxx/t/__magic/app-attempt-/task_20200911073922_0001_m_000957.pendingset > already exists; not retrying > {code} > The above happens in EKS with S3 environment and the job failure happens when > some executor containers are killed by K8s -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17258) MagicS3GuardCommitter fails with `pendingset` already exists
[ https://issues.apache.org/jira/browse/HADOOP-17258?focusedWorklogId=499388&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-499388 ] ASF GitHub Bot logged work on HADOOP-17258: --- Author: ASF GitHub Bot Created on: 12/Oct/20 12:43 Start Date: 12/Oct/20 12:43 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2371: URL: https://github.com/apache/hadoop/pull/2371#issuecomment-707096741 +1, merged to 3.3+ This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 499388) Time Spent: 4.5h (was: 4h 20m) > MagicS3GuardCommitter fails with `pendingset` already exists > > > Key: HADOOP-17258 > URL: https://issues.apache.org/jira/browse/HADOOP-17258 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Dongjoon Hyun >Priority: Major > Labels: pull-request-available > Time Spent: 4.5h > Remaining Estimate: 0h > > In `trunk/branch-3.3/branch-3.2`, `MagicS3GuardCommitter.innerCommitTask` has > `false` at `pendingSet.save`. > {code} > try { > pendingSet.save(getDestFS(), taskOutcomePath, false); > } catch (IOException e) { > LOG.warn("Failed to save task commit data to {} ", > taskOutcomePath, e); > abortPendingUploads(context, pendingSet.getCommits(), true); > throw e; > } > {code} > And, it can cause a job failure like the following. > {code} > WARN TaskSetManager: Lost task 1562.1 in stage 1.0 (TID 1788, 100.92.11.63, > executor 26): org.apache.spark.SparkException: Task failed while writing rows. 
> at > org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:257) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169) > at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) > at org.apache.spark.scheduler.Task.run(Task.scala:123) > at > org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408) > at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414) > at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown > Source) > at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown > Source) > at java.base/java.lang.Thread.run(Unknown Source) > Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: > s3a://xxx/__magic/app-attempt-/task_20200911063607_0001_m_001562.pendingset > already exists > at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:761) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:987) > at > org.apache.hadoop.util.JsonSerialization.save(JsonSerialization.java:269) > at > org.apache.hadoop.fs.s3a.commit.files.PendingSet.save(PendingSet.java:170) > at > org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter.innerCommitTask(MagicS3GuardCommitter.java:220) > at > org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter.commitTask(MagicS3GuardCommitter.java:165) > at > org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:50) > at > 
org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:77) > at > org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitTask(HadoopMapReduceCommitProtocol.scala:244) > at > org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:78) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242) > {code} > {code} > 20/09/11 07:44:38 ERROR TaskSetManager: Task 957.1 in stage 1.0 (TID 1
[jira] [Assigned] (HADOOP-17258) MagicS3GuardCommitter fails with `pendingset` already exists
[ https://issues.apache.org/jira/browse/HADOOP-17258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-17258: --- Assignee: Dongjoon Hyun > MagicS3GuardCommitter fails with `pendingset` already exists > > > Key: HADOOP-17258 > URL: https://issues.apache.org/jira/browse/HADOOP-17258 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Dongjoon Hyun >Assignee: Dongjoon Hyun >Priority: Major > Labels: pull-request-available > Time Spent: 4.5h > Remaining Estimate: 0h > > In `trunk/branch-3.3/branch-3.2`, `MagicS3GuardCommitter.innerCommitTask` has > `false` at `pendingSet.save`. > {code} > try { > pendingSet.save(getDestFS(), taskOutcomePath, false); > } catch (IOException e) { > LOG.warn("Failed to save task commit data to {} ", > taskOutcomePath, e); > abortPendingUploads(context, pendingSet.getCommits(), true); > throw e; > } > {code} > And, it can cause a job failure like the following. > {code} > WARN TaskSetManager: Lost task 1562.1 in stage 1.0 (TID 1788, 100.92.11.63, > executor 26): org.apache.spark.SparkException: Task failed while writing rows. 
> at > org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:257) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169) > at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) > at org.apache.spark.scheduler.Task.run(Task.scala:123) > at > org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408) > at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414) > at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown > Source) > at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown > Source) > at java.base/java.lang.Thread.run(Unknown Source) > Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: > s3a://xxx/__magic/app-attempt-/task_20200911063607_0001_m_001562.pendingset > already exists > at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:761) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:987) > at > org.apache.hadoop.util.JsonSerialization.save(JsonSerialization.java:269) > at > org.apache.hadoop.fs.s3a.commit.files.PendingSet.save(PendingSet.java:170) > at > org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter.innerCommitTask(MagicS3GuardCommitter.java:220) > at > org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter.commitTask(MagicS3GuardCommitter.java:165) > at > org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:50) > at > 
org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:77) > at > org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitTask(HadoopMapReduceCommitProtocol.scala:244) > at > org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:78) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247) > at > org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242) > {code} > {code} > 20/09/11 07:44:38 ERROR TaskSetManager: Task 957.1 in stage 1.0 (TID 1412) > can not write to output file: > org.apache.hadoop.fs.FileAlreadyExistsException: > s3a://xxx/t/__magic/app-attempt-/task_20200911073922_0001_m_000957.pendingset > already exists; not retrying > {code} > The above happens in EKS with S3 environment and the job failure happens when > some executor containers are killed by K8s -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org