[GitHub] [hadoop] langlaile1221 opened a new pull request #2566: HDFS-15739. Add missing Javadoc for a param in DFSNetworkTopology
langlaile1221 opened a new pull request #2566: URL: https://github.com/apache/hadoop/pull/2566 Only add missing Javadoc for a param in method chooseRandomWithStorageType of DFSNetworkTopology.java. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
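For context, a complete Javadoc block carries one @param tag per parameter; the class and method below are a simplified, hypothetical sketch used only to illustrate the convention, not the actual DFSNetworkTopology#chooseRandomWithStorageType:

```java
import java.util.List;

// Hypothetical illustration of a fully documented method: every parameter
// has a matching @param tag. Not the real DFSNetworkTopology code.
public class JavadocSketch {
  /**
   * Choose the first candidate node that offers the requested storage type.
   *
   * @param candidates nodes eligible for selection, as "name:STORAGETYPE"
   * @param storageType the storage type the chosen node must provide
   * @return the matching node name, or {@code null} if none qualifies
   */
  public static String chooseWithStorageType(List<String> candidates,
                                             String storageType) {
    for (String c : candidates) {
      if (c.endsWith(":" + storageType)) {
        return c.substring(0, c.indexOf(':'));
      }
    }
    return null;
  }
}
```

Javadoc tooling (and Yetus precommit checks) flags methods whose parameter lists and @param tags do not match, which is what the PR above fixes.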
[GitHub] [hadoop] langlaile1221 closed pull request #2565: HDFS-15739.Add missing Javadoc for a param in method chooseRandomWit…
langlaile1221 closed pull request #2565: URL: https://github.com/apache/hadoop/pull/2565
[GitHub] [hadoop] runitao commented on pull request #2565: HDFS-15739.Add missing Javadoc for a param in method chooseRandomWit…
runitao commented on pull request #2565: URL: https://github.com/apache/hadoop/pull/2565#issuecomment-748413464 +1 LGTM. The failed UTs are unrelated to this issue.
[jira] [Updated] (HADOOP-17440) Downgrade guava version in trunk
[ https://issues.apache.org/jira/browse/HADOOP-17440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lisheng Sun updated HADOOP-17440: - Description: See details in the HADOOP-17439 comments. was: See details in the HADOOP-17439 comments. > Downgrade guava version in trunk > > > Key: HADOOP-17440 > URL: https://issues.apache.org/jira/browse/HADOOP-17440 > Project: Hadoop Common > Issue Type: Task >Reporter: Lisheng Sun >Priority: Major > Attachments: HADOOP-17440.001.patch > > > See details in the HADOOP-17439 comments. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17439) No shade guava in trunk
[ https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17252091#comment-17252091 ] Lisheng Sun commented on HADOOP-17439: -- Thanks [~ste...@apache.org] for your attention to the issue. The problem I have is that hive's lib includes guava-11.jar, hadoop's lib includes guava-27.0-jre.jar, and hive's classpath includes hadoop's classpath. When a guava method is used in hive, the method cannot be found due to the incompatible guava versions. Other dependent components will hit similar problems. > No shade guava in trunk > --- > > Key: HADOOP-17439 > URL: https://issues.apache.org/jira/browse/HADOOP-17439 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Lisheng Sun >Priority: Major > Attachments: image-2020-12-18-22-01-45-424.png > > > !image-2020-12-18-22-01-45-424.png!
[jira] [Commented] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc
[ https://issues.apache.org/jira/browse/HADOOP-17338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17252071#comment-17252071 ] Yongjun Zhang commented on HADOOP-17338: Many thanks to [~ste...@apache.org] for reviewing and committing the PR! > Intermittent S3AInputStream failures: Premature end of Content-Length > delimited message body etc > > > Key: HADOOP-17338 > URL: https://issues.apache.org/jira/browse/HADOOP-17338 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1 > > Attachments: HADOOP-17338.001.patch > > Time Spent: 3h 50m > Remaining Estimate: 0h > > We are seeing the following two kinds of intermittent exceptions when using > S3AInputSteam: > 1. > {code:java} > Caused by: com.amazonaws.thirdparty.apache.http.ConnectionClosedException: > Premature end of Content-Length delimited message body (expected: 156463674; > received: 150001089 > at > com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178) > at > com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at > com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180) > at > 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at > com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:181) > at java.io.DataInputStream.readFully(DataInputStream.java:195) > at java.io.DataInputStream.readFully(DataInputStream.java:169) > at > org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:779) > at > org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:511) > at > org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:130) > at > org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214) > at > org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227) > at > org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:208) > at > org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:63) > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350) > ... 15 more > {code} > 2. 
> {code:java} > Caused by: javax.net.ssl.SSLException: SSL peer shut down incorrectly > at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:596) > at sun.security.ssl.InputRecord.read(InputRecord.java:532) > at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:990) > at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:948) > at sun.security.ssl.AppInputStream.read(AppInputStream.java:105) > at > com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) > at > com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198) > at > com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176) > at > com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at
[GitHub] [hadoop] NickyYe commented on pull request #2562: HDFS-15737. Don't remove datanodes from outOfServiceNodeBlocks while checking in DatanodeAdminManager
NickyYe commented on pull request #2562: URL: https://github.com/apache/hadoop/pull/2562#issuecomment-748338149 > Thanks for the information - this may explain why HDFS-12703 was needed, as some exceptions which were not logged at that time caused the decommission thread to stop running until the NN was restarted. The change there was to catch the exception. > > The change here looks correct to me, but as the issue exists on the trunk branch, we should fix it there first, and then backport to 3.3, 3.2, 3.1 and 2.10 so the fix is in place across all branches. Due to HDFS-14854, the fix on trunk could be a very different one, since it doesn't make sense to change the new interface with a boolean parameter to stopTrackingNode when DatanodeAdminBackoffMonitor doesn't need it. It looks like a better fix would be to introduce a cancelledNodes list in DatanodeAdminDefaultMonitor, just like DatanodeAdminBackoffMonitor has. Then, in stopTrackingNode, don't remove the dn from outOfServiceNodeBlocks; add it to cancelledNodes for further processing instead. However, that change would be a little bigger.
[jira] [Work logged] (HADOOP-17414) Magic committer files don't have the count of bytes written collected by spark
[ https://issues.apache.org/jira/browse/HADOOP-17414?focusedWorklogId=526150=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-526150 ] ASF GitHub Bot logged work on HADOOP-17414: --- Author: ASF GitHub Bot Created on: 18/Dec/20 21:46 Start Date: 18/Dec/20 21:46 Worklog Time Spent: 10m Work Description: dongjoon-hyun commented on pull request #2530: URL: https://github.com/apache/hadoop/pull/2530#issuecomment-748335153 For the reviewers~ > for who? It's because it's still not an official patch until this lands on the release branches. Issue Time Tracking --- Worklog Id: (was: 526150) Time Spent: 3h (was: 2h 50m) > Magic committer files don't have the count of bytes written collected by spark > -- > > Key: HADOOP-17414 > URL: https://issues.apache.org/jira/browse/HADOOP-17414 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 3h > Remaining Estimate: 0h > > The spark statistics tracking doesn't correctly assess the size of the > uploaded files as it only calls getFileStatus on the zero byte objects - not > the yet-to-manifest files. Which, given they don't exist yet, isn't easy to > do. > Solution: > * Add getXAttr and listXAttr API calls to S3AFileSystem > * Return all S3 object headers as XAttr attributes prefixed "header." That's > custom and standard (e.g. header.Content-Length).
> The setXAttr call isn't implemented, so for correctness the FS doesn't > declare its support for the API in hasPathCapability(). > The magic commit file write sets the length of the final data in the custom > header x-hadoop-s3a-magic-data-length in the marker file. > A matching patch in Spark will look for the XAttr > "header.x-hadoop-s3a-magic-data-length" when the file > being probed for output data is zero bytes long. > As a result, the job tracking statistics will report the > bytes written but yet to be manifest.
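The header-to-XAttr naming scheme described above can be sketched as follows. XAttrSketch and its helper methods are hypothetical stand-ins, not the S3AFileSystem implementation; only the "header." prefix and the x-hadoop-s3a-magic-data-length header name come from the issue description.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the naming scheme: raw S3 object headers are exposed as
// XAttrs under a "header." prefix, and a zero-byte magic-commit marker
// carries the real data length in x-hadoop-s3a-magic-data-length.
public class XAttrSketch {
  static final String PREFIX = "header.";
  static final String MAGIC_LEN = "x-hadoop-s3a-magic-data-length";

  // Map each object header name to its XAttr name ("header." + name).
  static Map<String, String> toXAttrs(Map<String, String> objectHeaders) {
    Map<String, String> xattrs = new HashMap<>();
    objectHeaders.forEach((k, v) -> xattrs.put(PREFIX + k, v));
    return xattrs;
  }

  // Spark-side probe: when the marker file is zero bytes, fall back to
  // the magic-data-length header for the yet-to-manifest size.
  static long effectiveLength(long fileStatusLen, Map<String, String> xattrs) {
    if (fileStatusLen == 0 && xattrs.containsKey(PREFIX + MAGIC_LEN)) {
      return Long.parseLong(xattrs.get(PREFIX + MAGIC_LEN));
    }
    return fileStatusLen;
  }
}
```

The prefix keeps custom and standard headers (e.g. header.Content-Length) in one uniform XAttr namespace, so the Spark side only needs a single string lookup.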
[GitHub] [hadoop] dongjoon-hyun commented on pull request #2530: HADOOP-17414. Magic committer files don't have the count of bytes written collected by spark
dongjoon-hyun commented on pull request #2530: URL: https://github.com/apache/hadoop/pull/2530#issuecomment-748335153 For the reviewers~ > for who? It's because it's still not an official patch until this lands on the release branches.
[jira] [Work logged] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc
[ https://issues.apache.org/jira/browse/HADOOP-17338?focusedWorklogId=526148=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-526148 ] ASF GitHub Bot logged work on HADOOP-17338: --- Author: ASF GitHub Bot Created on: 18/Dec/20 21:31 Start Date: 18/Dec/20 21:31 Worklog Time Spent: 10m Work Description: yzhangal commented on pull request #2497: URL: https://github.com/apache/hadoop/pull/2497#issuecomment-748329251 > ok. merged to trunk & just doing the 3.3 branch now Great, thanks so much @steveloughran ! Issue Time Tracking --- Worklog Id: (was: 526148) Time Spent: 3h 50m (was: 3h 40m) > Intermittent S3AInputStream failures: Premature end of Content-Length > delimited message body etc
[GitHub] [hadoop] yzhangal commented on pull request #2497: HADOOP-17338. Intermittent S3AInputStream failures: Premature end of …
yzhangal commented on pull request #2497: URL: https://github.com/apache/hadoop/pull/2497#issuecomment-748329251 > ok. merged to trunk & just doing the 3.3 branch now Great, thanks so much @steveloughran !
[GitHub] [hadoop] sodonnel commented on pull request #2562: HDFS-15737. Don't remove datanodes from outOfServiceNodeBlocks while checking in DatanodeAdminManager
sodonnel commented on pull request #2562: URL: https://github.com/apache/hadoop/pull/2562#issuecomment-748318475 Thanks for the information - this may explain why HDFS-12703 was needed, as some exceptions which were not logged at that time caused the decommission thread to stop running until the NN was restarted. The change there was to catch the exception. The change here looks correct to me, but as the issue exists on the trunk branch, we should fix it there first, and then backport to 3.3, 3.2, 3.1 and 2.10 so the fix is in place across all branches.
[jira] [Work logged] (HADOOP-17438) Increase docker memory limit in Jenkins
[ https://issues.apache.org/jira/browse/HADOOP-17438?focusedWorklogId=526140=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-526140 ] ASF GitHub Bot logged work on HADOOP-17438: --- Author: ASF GitHub Bot Created on: 18/Dec/20 20:52 Start Date: 18/Dec/20 20:52 Worklog Time Spent: 10m Work Description: amahussein commented on pull request #2560: URL: https://github.com/apache/hadoop/pull/2560#issuecomment-748313031 > I asked the infrastructure team how much memory we can use: https://issues.apache.org/jira/browse/INFRA-21207 Thanks @aajisaka ! > It would be nice to know what's tying up all of our memory. Because 20 GB is a lot for us to be using for unit tests @ericbadger . Definitely! It would be easier to narrow the scope of the investigation if this error had been reported when it first started to happen; unfortunately, that does not seem to be the case. The other way is to profile the memory of the image during the execution. Do you have any suggestions on how to approach this? A straightforward way would be to dump the system memory to the log at the beginning of each module. Issue Time Tracking --- Worklog Id: (was: 526140) Time Spent: 1h (was: 50m) > Increase docker memory limit in Jenkins > --- > > Key: HADOOP-17438 > URL: https://issues.apache.org/jira/browse/HADOOP-17438 > Project: Hadoop Common > Issue Type: Bug > Components: build, scripts, test, yetus >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > Yetus keeps failing with OOM.
> {code:bash} > unable to create new native thread > java.lang.OutOfMemoryError: unable to create new native thread > at java.lang.Thread.start0(Native Method) > at java.lang.Thread.start(Thread.java:717) > at > java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957) > at > java.util.concurrent.ThreadPoolExecutor.ensurePrestart(ThreadPoolExecutor.java:1603) > at > java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:334) > at > java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:533) > at > org.apache.maven.surefire.booter.ForkedBooter.launchLastDitchDaemonShutdownThread(ForkedBooter.java:369) > at > org.apache.maven.surefire.booter.ForkedBooter.acknowledgedExit(ForkedBooter.java:333) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:145) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {code} > > This jira is to increase the memory limit from 20g to 22g. > *Note: This is only a workaround to get things more productive. If this > change reduces the frequency of the OOM failure, there must be a follow-up to > profile the runtime and figure out which components are causing the docker container to > run out of memory.* > CC: [~aajisaka], [~elgoiri], [~weichiu], [~ebadger], [~tasanuma], > [~iwasakims], [~ayushtkn], [~inigoiri]
[GitHub] [hadoop] amahussein commented on pull request #2560: HADOOP-17438. Increase docker memory limit in Jenkins.
amahussein commented on pull request #2560: URL: https://github.com/apache/hadoop/pull/2560#issuecomment-748313031 > I asked the infrastructure team how much memory we can use: https://issues.apache.org/jira/browse/INFRA-21207 Thanks @aajisaka ! > It would be nice to know what's tying up all of our memory. Because 20 GB is a lot for us to be using for unit tests @ericbadger . Definitely! It would be easier to narrow the scope of the investigation if this error had been reported when it first started to happen; unfortunately, that does not seem to be the case. The other way is to profile the memory of the image during the execution. Do you have any suggestions on how to approach this? A straightforward way would be to dump the system memory to the log at the beginning of each module.
[GitHub] [hadoop] NickyYe commented on pull request #2562: HDFS-15737. Don't remove datanodes from outOfServiceNodeBlocks while checking in DatanodeAdminManager
NickyYe commented on pull request #2562: URL: https://github.com/apache/hadoop/pull/2562#issuecomment-748285060 > ConcurrentModificationException Thanks for looking. If there are only 2 datanodes in outOfServiceNodeBlocks and the first one is removed, it will be a dead loop on the second datanode. If there are more than 2 datanodes and the first one is removed, there will be a ConcurrentModificationException. I see both cases in our prod very often. This issue only happens on removal (dnAdmin.stopMaintenance(dn)). outOfServiceNodeBlocks.put(dn, blocks) only updates the value, so the CyclicIteration won't be affected.
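A minimal sketch of the failure mode and the deferred-removal fix discussed in this thread, using a plain HashMap as a stand-in for outOfServiceNodeBlocks (the class name and node names are hypothetical):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: a plain HashMap stands in for outOfServiceNodeBlocks.
public class StopTrackingSketch {

  // Removing an entry while iterating the same map trips the
  // fail-fast iterator with ConcurrentModificationException.
  static boolean removalMidIterationThrows() {
    Map<String, Integer> outOfService = new HashMap<>();
    outOfService.put("dn1", 10);
    outOfService.put("dn2", 20);
    outOfService.put("dn3", 30);
    try {
      for (String dn : outOfService.keySet()) {
        outOfService.remove("dn1"); // structural modification mid-iteration
      }
    } catch (ConcurrentModificationException e) {
      return true;
    }
    return false;
  }

  // The deferred-removal pattern suggested in the thread: collect the
  // nodes to stop tracking, then drain the queue after the scan.
  static int deferredRemovalSurvivors() {
    Map<String, Integer> outOfService = new HashMap<>();
    outOfService.put("dn1", 10);
    outOfService.put("dn2", 20);
    List<String> cancelledNodes = new ArrayList<>();
    for (String dn : outOfService.keySet()) {
      if (dn.equals("dn1")) {
        cancelledNodes.add(dn); // defer instead of removing here
      }
    }
    cancelledNodes.forEach(outOfService::remove); // safe: iteration is done
    return outOfService.size();
  }
}
```

In contrast, put(dn, blocks) on an existing key only replaces the value and is not a structural modification, which matches the observation above that updates do not disturb the iteration.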
[jira] [Updated] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc
[ https://issues.apache.org/jira/browse/HADOOP-17338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17338: Parent: HADOOP-16829 Issue Type: Sub-task (was: Bug) > Intermittent S3AInputStream failures: Premature end of Content-Length > delimited message body etc
[jira] [Resolved] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc
[ https://issues.apache.org/jira/browse/HADOOP-17338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-17338.
-------------------------------------
Fix Version/s: 3.3.1
Resolution: Fixed

> Intermittent S3AInputStream failures: Premature end of Content-Length
> delimited message body etc
>
> Key: HADOOP-17338
> URL: https://issues.apache.org/jira/browse/HADOOP-17338
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 3.3.0
> Reporter: Yongjun Zhang
> Assignee: Yongjun Zhang
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.3.1
> Attachments: HADOOP-17338.001.patch
> Time Spent: 3h 40m
> Remaining Estimate: 0h
>
> We are seeing the following two kinds of intermittent exceptions when using
> S3AInputStream:
>
> 1.
> {code:java}
> Caused by: com.amazonaws.thirdparty.apache.http.ConnectionClosedException:
> Premature end of Content-Length delimited message body (expected: 156463674; received: 150001089)
>   at com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
>   at com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
>   at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
>   at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
>   at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
>   at com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
>   at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
>   at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
>   at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
>   at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
>   at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
>   at com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
>   at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
>   at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:181)
>   at java.io.DataInputStream.readFully(DataInputStream.java:195)
>   at java.io.DataInputStream.readFully(DataInputStream.java:169)
>   at org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:779)
>   at org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:511)
>   at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:130)
>   at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214)
>   at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227)
>   at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:208)
>   at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:63)
>   at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
>   ... 15 more
> {code}
>
> 2.
> {code:java}
> Caused by: javax.net.ssl.SSLException: SSL peer shut down incorrectly
>   at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:596)
>   at sun.security.ssl.InputRecord.read(InputRecord.java:532)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:990)
>   at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:948)
>   at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
>   at com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
>   at com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
>   at com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
>   at com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
>   at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
>   at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
>   at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
>   at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
>   at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
>   at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
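Both failures surface as an IOException partway through a read, and the fix merged via PR #2497 recovers inside S3AInputStream by reopening the stream and retrying. As an illustration only (not the actual S3AInputStream code; all names here are hypothetical), the bounded-retry shape of that recovery looks roughly like:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Illustrative only: the real HADOOP-17338 fix lives inside
// S3AInputStream and reopens the HTTP stream at the last read
// position before retrying. This standalone sketch just shows the
// bounded-retry shape of that recovery.
public final class ReadWithRetry {
    public static <T> T once(Callable<T> op, int maxAttempts) throws Exception {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be >= 1");
        }
        IOException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();     // attempt the read
            } catch (IOException e) {
                last = e;             // e.g. premature EOF, SSL reset
                // the real code would close and reopen the stream here
            }
        }
        throw last;                   // retries exhausted
    }
}
```

Transient connection drops then succeed on a later attempt instead of failing the whole Parquet row-group read.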
[jira] [Work logged] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc
[ https://issues.apache.org/jira/browse/HADOOP-17338?focusedWorklogId=526109&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-526109 ]

ASF GitHub Bot logged work on HADOOP-17338:
---
Author: ASF GitHub Bot
Created on: 18/Dec/20 19:15
Start Date: 18/Dec/20 19:15
Worklog Time Spent: 10m

Work Description: steveloughran commented on pull request #2497: URL: https://github.com/apache/hadoop/pull/2497#issuecomment-748271861

ok. merged to trunk & just doing the 3.3 branch now

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 526109) Time Spent: 3h 40m (was: 3.5h)
[GitHub] [hadoop] steveloughran commented on pull request #2497: HADOOP-17338. Intermittent S3AInputStream failures: Premature end of …
steveloughran commented on pull request #2497: URL: https://github.com/apache/hadoop/pull/2497#issuecomment-748271861 ok. merged to trunk & just doing the 3.3 branch now This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc
[ https://issues.apache.org/jira/browse/HADOOP-17338?focusedWorklogId=526106&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-526106 ]

ASF GitHub Bot logged work on HADOOP-17338:
---
Author: ASF GitHub Bot
Created on: 18/Dec/20 19:08
Start Date: 18/Dec/20 19:08
Worklog Time Spent: 10m

Work Description: steveloughran merged pull request #2497: URL: https://github.com/apache/hadoop/pull/2497

Issue Time Tracking
---
Worklog Id: (was: 526106) Time Spent: 3.5h (was: 3h 20m)
[GitHub] [hadoop] steveloughran merged pull request #2497: HADOOP-17338. Intermittent S3AInputStream failures: Premature end of …
steveloughran merged pull request #2497: URL: https://github.com/apache/hadoop/pull/2497
[GitHub] [hadoop] amahussein closed pull request #2456: HDFS-15679. DFSOutputStream should not throw exception after closed
amahussein closed pull request #2456: URL: https://github.com/apache/hadoop/pull/2456
[GitHub] [hadoop] amahussein commented on pull request #2456: HDFS-15679. DFSOutputStream should not throw exception after closed
amahussein commented on pull request #2456: URL: https://github.com/apache/hadoop/pull/2456#issuecomment-748259069

I will close this PR until there is a clear specification on the `close()` interface.
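The unresolved question above is the `close()` contract: `java.io.Closeable` documents that closing an already-closed stream must have no effect, but what `write()` should do after `close()` is left to each implementation. A minimal sketch of the close-once pattern under debate (the class name is illustrative, not DFSOutputStream itself):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative close-once stream: close() is idempotent per the
// java.io.Closeable contract, while write() after close() fails.
// Whether DFSOutputStream should behave exactly this way is what
// the PR discussion left unresolved.
class CloseOnceStream extends OutputStream {
    private final OutputStream out = new ByteArrayOutputStream();
    private boolean closed = false;

    @Override
    public void write(int b) throws IOException {
        if (closed) {
            throw new IOException("stream is closed");
        }
        out.write(b);
    }

    @Override
    public void close() throws IOException {
        if (closed) {
            return;       // second close(): no-op, no exception
        }
        closed = true;
        out.close();
    }
}
```

With this shape, double-close is harmless for callers using try-with-resources, while a write after close fails fast instead of silently losing data.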
[GitHub] [hadoop] hadoop-yetus commented on pull request #2565: HDFS-15739.Add missing Javadoc for a param in method chooseRandomWit…
hadoop-yetus commented on pull request #2565: URL: https://github.com/apache/hadoop/pull/2565#issuecomment-748255914

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:---:|---:|---:|:---:|:---:|
| +0 :ok: | reexec | 0m 34s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 32m 51s | | trunk passed |
| +1 :green_heart: | compile | 1m 19s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | checkstyle | 0m 48s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 25s | | trunk passed |
| +1 :green_heart: | shadedclient | 18m 1s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 54s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 1m 24s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +0 :ok: | spotbugs | 3m 3s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 3m 1s | | trunk passed |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 1m 13s | | the patch passed |
| +1 :green_heart: | compile | 1m 10s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javac | 1m 10s | | the patch passed |
| +1 :green_heart: | compile | 1m 5s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | javac | 1m 5s | | the patch passed |
| +1 :green_heart: | checkstyle | 0m 39s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 10s | | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 14m 53s | | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 1m 20s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | findbugs | 3m 2s | | the patch passed |
| | _ Other Tests _ | | | |
| -1 :x: | unit | 97m 12s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2565/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. |
| | | 186m 57s | | |

| Reason | Tests |
|---:|:---|
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2565/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2565 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 3be14723aaa4 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 7a88f453667 |
| Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2565/1/testReport/ |
| Max. process+thread count | 4090 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2565/1/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
[GitHub] [hadoop] steveloughran commented on pull request #2548: DRAFT PR: Implementing ListStatusRemoteIterator
steveloughran commented on pull request #2548: URL: https://github.com/apache/hadoop/pull/2548#issuecomment-748226755

we should talk about this in 2021. For now:

* see #2553 for IOStatistics collection *including in remote iterators*, and a class *RemoteIterators* to help you wrap them
* look @ mukund's work HADOOP-17400 including the issue of when to report failures

I think it makes sense to have an overall "optimise abfs incremental listings" JIRA and create issues underneath, as a lot is unified.
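For context, `org.apache.hadoop.fs.RemoteIterator` is a two-method interface (`hasNext`/`next`, both throwing IOException), and the *RemoteIterators* class mentioned above is a helper for wrapping such iterators. The sketch below shows the wrapping idea only; the interface is redeclared locally so it runs without Hadoop on the classpath, and the wrapper class name is illustrative, not the actual Hadoop API.

```java
import java.io.IOException;
import java.util.Iterator;

// Local stand-in for org.apache.hadoop.fs.RemoteIterator so the
// sketch is self-contained; the real interface has this shape.
interface RemoteIterator<E> {
    boolean hasNext() throws IOException;
    E next() throws IOException;
}

// Sketch of the "wrap a remote iterator to observe it" idea: count
// elements as they stream past, e.g. to feed a statistics counter,
// without buffering the (possibly paged) remote listing.
final class CountingRemoteIterator<E> implements RemoteIterator<E> {
    private final RemoteIterator<E> inner;
    private long count;

    CountingRemoteIterator(RemoteIterator<E> inner) {
        this.inner = inner;
    }

    @Override public boolean hasNext() throws IOException {
        return inner.hasNext();
    }

    @Override public E next() throws IOException {
        E e = inner.next();
        count++;               // one observation per element yielded
        return e;
    }

    long elementsSeen() {
        return count;
    }

    // Convenience: adapt an ordinary in-memory Iterator for testing.
    static <E> RemoteIterator<E> fromIterator(Iterator<E> it) {
        return new RemoteIterator<E>() {
            @Override public boolean hasNext() { return it.hasNext(); }
            @Override public E next() { return it.next(); }
        };
    }
}
```

Because the wrapper delegates lazily, it composes with incremental (paged) listings without forcing the whole listing into memory.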
[GitHub] [hadoop] steveloughran commented on pull request #2551: ADLS Gen1: Updating gen1 SDK version from 2.3.6 to 2.3.9
steveloughran commented on pull request #2551: URL: https://github.com/apache/hadoop/pull/2551#issuecomment-748224170

@bilaharith before I merge this, file the hadoop JIRA & update the title of this PR to it. I wouldn't even have noticed the patch surfacing if I wasn't trying out code review through IntelliJ IDEA.
[jira] [Commented] (HADOOP-17439) No shade guava in trunk
[ https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17251913#comment-17251913 ]

Steve Loughran commented on HADOOP-17439:
---

# we haven't removed transitive guava dependencies; cutting out guava.jar from the CP could have surprises.
# it's now possible for downstream apps to do things here.

What's the actual problem we are trying to solve and what side effects are likely? +[~gabor.bota]

> No shade guava in trunk
> ---
>
> Key: HADOOP-17439
> URL: https://issues.apache.org/jira/browse/HADOOP-17439
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Lisheng Sun
> Priority: Major
> Attachments: image-2020-12-18-22-01-45-424.png
>
> !image-2020-12-18-22-01-45-424.png!

--
This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hadoop] steveloughran commented on pull request #2551: ADLS Gen1: Updating gen1 SDK version from 2.3.6 to 2.3.9
steveloughran commented on pull request #2551: URL: https://github.com/apache/hadoop/pull/2551#issuecomment-748221293

+1, merging. I like the explicit wildfly declaration. Wildfly + openssl versions are a source of pain.
[GitHub] [hadoop] steveloughran commented on a change in pull request #2551: ADLS Gen1: Updating gen1 SDK version from 2.3.6 to 2.3.9
steveloughran commented on a change in pull request #2551: URL: https://github.com/apache/hadoop/pull/2551#discussion_r545981728

## File path: hadoop-tools/hadoop-azure-datalake/pom.xml ##
@@ -166,5 +166,12 @@
       test
       test-jar
+
+    <dependency>
+      <groupId>org.wildfly.openssl</groupId>
+      <artifactId>wildfly-openssl</artifactId>
+      <scope>compile</scope>
+    </dependency>

Review comment: just because hadoop-project declares an artifact doesn't mean it comes in on the classpath. hadoop-common compiles with it but doesn't force it downstream, so modules which needed it do have to declare it.
[GitHub] [hadoop] steveloughran commented on pull request #2551: ADLS Gen1: Updating gen1 SDK version from 2.3.6 to 2.3.9
steveloughran commented on pull request #2551: URL: https://github.com/apache/hadoop/pull/2551#issuecomment-748219208

older versions of the library had wildfly embedded inside it, which was a source of pain on its own; presumably later versions pulled it out, which is why it is explicitly needed
[jira] [Work logged] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc
[ https://issues.apache.org/jira/browse/HADOOP-17338?focusedWorklogId=526070&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-526070 ]

ASF GitHub Bot logged work on HADOOP-17338:
---
Author: ASF GitHub Bot
Created on: 18/Dec/20 17:26
Start Date: 18/Dec/20 17:26
Worklog Time Spent: 10m

Work Description: steveloughran commented on pull request #2455: URL: https://github.com/apache/hadoop/pull/2455#issuecomment-748217627

closing this as superseded by #2497

Issue Time Tracking
---
Worklog Id: (was: 526070) Time Spent: 3h 10m (was: 3h)
[jira] [Work logged] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc
[ https://issues.apache.org/jira/browse/HADOOP-17338?focusedWorklogId=526071=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-526071 ] ASF GitHub Bot logged work on HADOOP-17338: --- Author: ASF GitHub Bot Created on: 18/Dec/20 17:26 Start Date: 18/Dec/20 17:26 Worklog Time Spent: 10m Work Description: steveloughran closed pull request #2455: URL: https://github.com/apache/hadoop/pull/2455 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 526071) Time Spent: 3h 20m (was: 3h 10m) > Intermittent S3AInputStream failures: Premature end of Content-Length > delimited message body etc > > > Key: HADOOP-17338 > URL: https://issues.apache.org/jira/browse/HADOOP-17338 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang >Priority: Major > Labels: pull-request-available > Attachments: HADOOP-17338.001.patch > > Time Spent: 3h 20m > Remaining Estimate: 0h > > We are seeing the following two kinds of intermittent exceptions when using > S3AInputSteam: > 1. 
1.
{code:java}
Caused by: com.amazonaws.thirdparty.apache.http.ConnectionClosedException: Premature end of Content-Length delimited message body (expected: 156463674; received: 150001089)
	at com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
	at com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
	at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
	at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
	at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
	at com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
	at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
	at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
	at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
	at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
	at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
	at com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
	at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
	at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:181)
	at java.io.DataInputStream.readFully(DataInputStream.java:195)
	at java.io.DataInputStream.readFully(DataInputStream.java:169)
	at org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:779)
	at org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:511)
	at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:130)
	at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214)
	at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227)
	at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:208)
	at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:63)
	at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
	... 15 more
{code}
2.
{code:java}
Caused by: javax.net.ssl.SSLException: SSL peer shut down incorrectly
	at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:596)
	at sun.security.ssl.InputRecord.read(InputRecord.java:532)
	at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:990)
	at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:948)
	at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
	at com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
	at com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
	at
{code}
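The first trace above is a short read: the HTTP response body ends before the advertised Content-Length has been delivered, and the failure surfaces from `DataInputStream.readFully` deep inside a Parquet scan. The approach discussed for HADOOP-17338 is to retry such reads inside the input stream. As a rough illustration only — not the actual S3AInputStream code, and the helper name is hypothetical — a sketch of the retry-on-premature-EOF idea:

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.function.Supplier;

// Illustrative only: retry a full read when the stream ends before the
// expected number of bytes arrives (a "premature end of body" short read).
public class RetryingRead {

    static byte[] readFullyWithRetry(Supplier<InputStream> reopen,
                                     int expectedLen,
                                     int maxAttempts) throws IOException {
        IOException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            byte[] buf = new byte[expectedLen];
            try (InputStream in = reopen.get()) {
                int off = 0;
                while (off < expectedLen) {
                    int n = in.read(buf, off, expectedLen - off);
                    if (n < 0) {
                        // Fewer bytes than promised: treat it like the
                        // ConnectionClosedException above and retry.
                        throw new EOFException("short read: got " + off
                            + " of " + expectedLen + " bytes");
                    }
                    off += n;
                }
                return buf;
            } catch (EOFException e) {
                last = e;  // reopen the stream and try again
            }
        }
        throw last;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello world".getBytes();
        byte[] out = readFullyWithRetry(
            () -> new ByteArrayInputStream(data), data.length, 3);
        System.out.println(new String(out));
    }
}
```

The key design point is that the retry happens below `readFully`, so callers such as the Parquet reader never see the transient failure.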
[GitHub] [hadoop] steveloughran closed pull request #2455: HADOOP-17338. Intermittent S3AInputStream failures: Premature end of …
steveloughran closed pull request #2455: URL: https://github.com/apache/hadoop/pull/2455 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on pull request #2455: HADOOP-17338. Intermittent S3AInputStream failures: Premature end of …
steveloughran commented on pull request #2455: URL: https://github.com/apache/hadoop/pull/2455#issuecomment-748217627 Closing this as superseded by #2497.
[jira] [Commented] (HADOOP-17440) Downgrade guava version in trunk
[ https://issues.apache.org/jira/browse/HADOOP-17440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17251870#comment-17251870 ] Hadoop QA commented on HADOOP-17440: -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 50s | Docker mode activated. |
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |

_trunk Compile Tests_
| +1 | mvninstall | 23m 32s | trunk passed |
| +1 | compile | 0m 16s | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | compile | 0m 16s | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 | mvnsite | 0m 19s | trunk passed |
| +1 | shadedclient | 41m 27s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 18s | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | javadoc | 0m 17s | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |

_Patch Compile Tests_
| +1 | mvninstall | 0m 11s | the patch passed |
| +1 | compile | 0m 11s | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | javac | 0m 11s | the patch passed |
| +1 | compile | 0m 10s | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 | javac | 0m 10s | the patch passed |
| +1 | mvnsite | 0m 13s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 3s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 16m 26s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 15s | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 | javadoc | 0m 13s | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |

_Other Tests_
| +1 | unit | 0m 13s | hadoop-project in the patch passed. |
| +1 | asflicense
[jira] [Work logged] (HADOOP-17430) There is no way to clear Text bytes now
[ https://issues.apache.org/jira/browse/HADOOP-17430?focusedWorklogId=526036=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-526036 ] ASF GitHub Bot logged work on HADOOP-17430: --- Author: ASF GitHub Bot Created on: 18/Dec/20 16:14 Start Date: 18/Dec/20 16:14 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2545: URL: https://github.com/apache/hadoop/pull/2545#issuecomment-748181279 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 35s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 45s | | trunk passed | | +1 :green_heart: | compile | 21m 56s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 18m 38s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 51s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 32s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 8s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 4s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 1m 36s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 2m 20s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 2m 18s | | trunk passed | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 53s | | the patch passed | | +1 :green_heart: | compile | 19m 14s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 19m 14s | | the patch passed | | +1 :green_heart: | compile | 17m 18s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 17m 18s | | the patch passed | | +1 :green_heart: | checkstyle | 0m 54s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 31s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 17s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 1s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 1m 33s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | findbugs | 2m 25s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 9m 43s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 56s | | The patch does not generate ASF License warnings. 
| | | | 174m 38s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2545/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2545 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux bd437af5da2a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 7a88f453667 | | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2545/4/testReport/ | | Max. process+thread count | 3258 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2545/4/console | | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT
[GitHub] [hadoop] hadoop-yetus commented on pull request #2545: HADOOP-17430. Add clear bytes logic for hadoop Text
hadoop-yetus commented on pull request #2545: URL: https://github.com/apache/hadoop/pull/2545#issuecomment-748181279 :confetti_ball: **+1 overall**
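The patch under test here, HADOOP-17430, is about `org.apache.hadoop.io.Text` offering no way to clear its backing bytes: resetting only the length leaves stale payload in the reused buffer, which `getBytes()` still exposes. A hedged sketch of that pitfall — a minimal stand-in class, not Hadoop's actual `Text` or the actual patch — contrasting a length-only `clear()` with a wiping variant:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Illustrative sketch of the HADOOP-17430 concern (not Hadoop's real Text):
// a clear() that only resets the length leaves stale bytes in the backing
// array, which getBytes() still exposes beyond getLength().
public class MiniText {
    private byte[] bytes = new byte[0];
    private int length;

    public void set(String s) {
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        if (bytes.length < utf8.length) {
            bytes = new byte[utf8.length];
        }
        System.arraycopy(utf8, 0, bytes, 0, utf8.length);
        length = utf8.length;
    }

    /** Cheap clear: length goes to zero but the old bytes remain. */
    public void clear() {
        length = 0;
    }

    /** Clear that also wipes the buffer, so no stale data leaks. */
    public void clearBytes() {
        Arrays.fill(bytes, 0, length, (byte) 0);
        length = 0;
    }

    public byte[] getBytes() { return bytes; }
    public int getLength() { return length; }

    public static void main(String[] args) {
        MiniText t = new MiniText();
        t.set("secret");
        t.clear();
        // Length is 0, yet the old payload is still readable in the array:
        System.out.println(new String(t.getBytes(), 0, 6,
            StandardCharsets.UTF_8));  // prints "secret"
        t.set("secret");
        t.clearBytes();
        System.out.println(t.getBytes()[0]);  // prints 0: buffer was wiped
    }
}
```

Whether zeroing should be part of `clear()` or a separate method is exactly the kind of API question a patch like PR #2545 has to settle; the sketch keeps both so the difference is visible.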
[GitHub] [hadoop] hadoop-yetus commented on pull request #2564: YARN-10538: Add RECOMMISSIONING nodes to the list of updated nodes returned to the AM
hadoop-yetus commented on pull request #2564: URL: https://github.com/apache/hadoop/pull/2564#issuecomment-748173635 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 31s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 25s | | trunk passed | | +1 :green_heart: | compile | 1m 2s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 0m 53s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 40s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 59s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 1s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 45s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 0m 40s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 1m 49s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 1m 45s | | trunk passed | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 49s | | the patch passed | | +1 :green_heart: | compile | 0m 51s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 0m 51s | | the patch passed | | +1 :green_heart: | compile | 0m 47s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 0m 47s | | the patch passed | | -0 :warning: | checkstyle | 0m 32s | [/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2564/1/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 18 unchanged - 0 fixed = 19 total (was 18) | | +1 :green_heart: | mvnsite | 0m 48s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 5s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 39s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 0m 34s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | findbugs | 1m 47s | | the patch passed | _ Other Tests _ | | -1 :x: | unit | 89m 21s | [/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2564/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-server-resourcemanager in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 32s | | The patch does not generate ASF License warnings. | | | | 170m 22s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2564/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2564 | | JIRA Issue | YARN-10538 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux d32f50c9baa4 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 7a88f453667 | | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private
[jira] [Updated] (HADOOP-17440) Downgrade guava version in trunk
[ https://issues.apache.org/jira/browse/HADOOP-17440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lisheng Sun updated HADOOP-17440: - Attachment: HADOOP-17440.001.patch Status: Patch Available (was: Open) > Downgrade guava version in trunk > > > Key: HADOOP-17440 > URL: https://issues.apache.org/jira/browse/HADOOP-17440 > Project: Hadoop Common > Issue Type: Task >Reporter: Lisheng Sun >Priority: Major > Attachments: HADOOP-17440.001.patch > > > See details the Jira HADOOP-17439 comments. > h1. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17440) Downgrade guava version in trunk
[ https://issues.apache.org/jira/browse/HADOOP-17440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lisheng Sun updated HADOOP-17440: - Description: See details in the HADOOP-17439 comments.
[jira] [Created] (HADOOP-17440) Downgrade guava version in trunk
Lisheng Sun created HADOOP-17440: Summary: Downgrade guava version in trunk Key: HADOOP-17440 URL: https://issues.apache.org/jira/browse/HADOOP-17440 Project: Hadoop Common Issue Type: Task Reporter: Lisheng Sun
[GitHub] [hadoop] langlaile1221 opened a new pull request #2565: HDFS-15739.Add missing Javadoc for a param in method chooseRandomWit…
langlaile1221 opened a new pull request #2565: URL: https://github.com/apache/hadoop/pull/2565 Only adds missing Javadoc for a param in the chooseRandomWithStorageType method of AppSchedulingInfo.java.
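The patch above only completes the Javadoc so that every parameter of `chooseRandomWithStorageType` carries an `@param` tag, which javadoc lint otherwise flags. As a hedged illustration — a hypothetical method, since the patch's exact parameter names aren't quoted here — a fully documented method looks like:

```java
import java.util.Random;

// Hypothetical example of a fully documented method. The real patch only
// adds one missing @param tag in DFSNetworkTopology/AppSchedulingInfo;
// the method and parameter names below are illustrative.
public class JavadocExample {
    private static final Random RANDOM = new Random();

    /**
     * Choose a random integer inside a scope, excluding one value.
     *
     * @param scope upper bound (exclusive) of the range to pick from
     * @param excluded a value that must never be returned
     * @return a random value in [0, scope) other than {@code excluded}
     */
    static int chooseRandomExcluding(int scope, int excluded) {
        int v;
        do {
            v = RANDOM.nextInt(scope);
        } while (v == excluded);
        return v;
    }

    public static void main(String[] args) {
        System.out.println(chooseRandomExcluding(10, 3));
    }
}
```

With JDK 8+ doclint enabled (`-Xdoclint:all`), a method whose Javadoc omits one of its parameters produces a warning, which is why even a one-tag patch like this is worth merging.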
[jira] [Commented] (HADOOP-17439) No shade guava in trunk
[ https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17251829#comment-17251829 ] Ayush Saxena commented on HADOOP-17439: --- Yeps, I totally agree with rolling back guava; Stack too was in agreement then. Maybe you can spawn a jira asking for that, and if no one complains you can do it. I am +1 for rolling back, since the hadoop code isn't using it. > No shade guava in trunk > --- > > Key: HADOOP-17439 > URL: https://issues.apache.org/jira/browse/HADOOP-17439 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Lisheng Sun >Priority: Major > Attachments: image-2020-12-18-22-01-45-424.png > > > !image-2020-12-18-22-01-45-424.png!
[jira] [Comment Edited] (HADOOP-17439) No shade guava in trunk
[ https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17251806#comment-17251806 ] Ayush Saxena edited comment on HADOOP-17439 at 12/18/20, 3:32 PM: --- I want to do that, but I'm not sure the process allows it. The odd thing is that Hive worked for me with this patch: I ran a couple of Hive unit tests that fail with a NoClassDef error without this patch, and with it they pass. {{hive-exec}}, I guess, shades guava, so from that side it shouldn't be a problem. But if the hive classpath is blindly taking the entire hadoop classpath, we should fix that in Hive; other common dependencies like netty can cause trouble too if hadoop and hive have different versions. If that surfaces while upgrading any downstream project, they can/should exclude guava from hadoop in their classpath. This patch gives a doable way; downstream still has some doable changes to make, but it paves the way to do so. It makes things simpler, though not so trivial as just changing the version number.
[jira] [Commented] (HADOOP-17439) No shade guava in trunk
[ https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17251823#comment-17251823 ] Lisheng Sun commented on HADOOP-17439: --- I very much agree with you. But our company has dozens of components like Hive whose classpaths blindly take the entire hadoop classpath. The cost of modifying each dependent component the way you describe is too high, so I want to fix guava in hadoop itself.
[jira] [Comment Edited] (HADOOP-17439) No shade guava in trunk
[ https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17251801#comment-17251801 ] Lisheng Sun edited comment on HADOOP-17439 at 12/18/20, 3:03 PM: --- I don't understand what the current patch solves. Could you give me an example? Thank you [~ayushtkn]. Do we have plans to downgrade guava to 11 or another version in trunk?
[jira] [Commented] (HADOOP-17439) No shade guava in trunk
[ https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17251794#comment-17251794 ] Ayush Saxena commented on HADOOP-17439: --- Even before shading, the guava jar was there, so I don't think shading induced this in any way. Removing it from the hadoop classpath won't be a safe option either, because in that case hadoop's own dependencies would break. You can try downgrading the guava version and building hadoop; that would solve the problem. Earlier I too thought of doing that: https://issues.apache.org/jira/browse/HADOOP-17288?focusedCommentId=17208607=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17208607 But I left it, since in its present state the guava version doesn't bother Hadoop much, and downgrading isn't a very acceptable solution.
[jira] [Commented] (HADOOP-17439) No shade guava in trunk
[ https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17251788#comment-17251788 ] Lisheng Sun commented on HADOOP-17439: -- The problem I have is that hive's lib includes guava-11.jar, hadoop's lib includes guava-27.0-jre.jar, and hive's classpath includes hadoop's classpath. When a guava method is used in hive, the method cannot be found due to the incompatible guava versions. Other dependent components will run into similar problems. So I think we can remove the original guava and keep the shaded guava.
[jira] [Comment Edited] (HADOOP-17439) No shade guava in trunk
[ https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17251783#comment-17251783 ] Lisheng Sun edited comment on HADOOP-17439 at 12/18/20, 2:21 PM: - hi [~ayushtkn] Currently, if one component which relies on hadoop has other version of guava, there will still be guava version conflicts, right? was (Author: leosun08): Currently, if one component which relies on hadoop has other version of guava, there will still be guava version conflicts, right?
[jira] [Commented] (HADOOP-17439) No shade guava in trunk
[ https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17251785#comment-17251785 ] Ayush Saxena commented on HADOOP-17439: --- The hadoop code itself won't be using the original guava; we changed the imports, which now have the prefix org.apache.hadoop.thirdparty.com.google.common. Those classes won't be present in the original guava jar. What issue are you facing due to this? The original jar is kept for the dependencies of hadoop: say hadoop depends on curator and curator requires guava, then curator will use this original guava, not the hadoop jars.
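Ayush's point — that Hadoop's own code now imports the relocated org.apache.hadoop.thirdparty.com.google.common packages while the plain com.google.common jar is kept only for downstream dependencies — can be checked with a small classpath probe. This is a hedged sketch: which of the two guava class names below resolves depends entirely on what jars are on your classpath, and the names are only illustrative.

```java
// Probe whether a class name is loadable on the current classpath.
// Handy for checking which guava flavours (shaded vs. unshaded) a JVM
// actually sees; both, one, or neither may be present.
public class GuavaProbe {
    static boolean present(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("unshaded guava: "
            + present("com.google.common.base.Preconditions"));
        System.out.println("shaded guava:   "
            + present("org.apache.hadoop.thirdparty.com.google.common.base.Preconditions"));
    }
}
```

Running this inside a hive process would show whether the conflicting unshaded guava Lisheng describes is actually on the effective classpath.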
[jira] [Commented] (HADOOP-17439) No shade guava in trunk
[ https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17251783#comment-17251783 ] Lisheng Sun commented on HADOOP-17439: -- Currently, if one component which relies on hadoop has other version of guava, there will still be guava version conflicts, right?
[jira] [Commented] (HADOOP-17288) Use shaded guava from thirdparty
[ https://issues.apache.org/jira/browse/HADOOP-17288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17251780#comment-17251780 ] Lisheng Sun commented on HADOOP-17288: -- hi [~ayushtkn] I found no shaded guava in my local trunk build. I don’t know whether it’s my environment or some other problem. > Use shaded guava from thirdparty > > Key: HADOOP-17288 > URL: https://issues.apache.org/jira/browse/HADOOP-17288 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0 > > Time Spent: 5h 10m > Remaining Estimate: 0h > > Use the shaded version of guava in hadoop-thirdparty
[jira] [Updated] (HADOOP-17439) No shade guava in trunk
[ https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lisheng Sun updated HADOOP-17439: - Summary: No shade guava in trunk (was: No shade guava in branch)
[jira] [Commented] (HADOOP-17439) No shade guava in trunk
[ https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17251778#comment-17251778 ] Ayush Saxena commented on HADOOP-17439: --- That is expected. Is that creating any issues?
[jira] [Created] (HADOOP-17439) No shade guava in branch
Lisheng Sun created HADOOP-17439: Summary: No shade guava in branch Key: HADOOP-17439 URL: https://issues.apache.org/jira/browse/HADOOP-17439 Project: Hadoop Common Issue Type: Sub-task Reporter: Lisheng Sun Attachments: image-2020-12-18-22-01-45-424.png !image-2020-12-18-22-01-45-424.png!
[GitHub] [hadoop] srinivasst commented on pull request #2564: YARN-10538: Add RECOMMISSIONING nodes to the list of updated nodes returned to the AM
srinivasst commented on pull request #2564: URL: https://github.com/apache/hadoop/pull/2564#issuecomment-748079468 @abmodi @bibinchundatt can you please review this This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17430) There is no way to clear Text bytes now
[ https://issues.apache.org/jira/browse/HADOOP-17430?focusedWorklogId=525978=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-525978 ] ASF GitHub Bot logged work on HADOOP-17430: --- Author: ASF GitHub Bot Created on: 18/Dec/20 13:12 Start Date: 18/Dec/20 13:12 Worklog Time Spent: 10m Work Description: dgzdot commented on a change in pull request #2545: URL: https://github.com/apache/hadoop/pull/2545#discussion_r545821814 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java ## @@ -226,6 +226,12 @@ public void set(String string) { * Set to a utf8 byte array. */ public void set(byte[] utf8) { +if(utf8.length == 0){ + bytes = EMPTY_BYTES; + length = 0; + textLength = -1; + return; Review comment: Thank you for your advice. I'll fix it Issue Time Tracking --- Worklog Id: (was: 525978) Time Spent: 2h 20m (was: 2h 10m) > There is no way to clear Text bytes now > --- > > Key: HADOOP-17430 > URL: https://issues.apache.org/jira/browse/HADOOP-17430 > Project: Hadoop Common > Issue Type: Wish > Components: common >Reporter: gaozhan ding >Priority: Minor > Labels: pull-request-available > Time Spent: 2h 20m > Remaining Estimate: 0h > > In org.apache.hadoop.io.Text:clear() method, the comments show that we can > free the bytes by call set(new byte[0]), but it's not going to work now. > Maybe we can follow this comments. > > > {code:java} > // org.apache.hadoop.io.Text > /** > * Clear the string to empty. > * > * Note: For performance reasons, this call does not clear the > * underlying byte array that is retrievable via {@link #getBytes()}. > * In order to free the byte-array memory, call {@link #set(byte[])} > * with an empty byte array (For example, new byte[0]). > */ > public void clear() { > length = 0; > textLength = -1; > } > {code}
[GitHub] [hadoop] dgzdot commented on a change in pull request #2545: HADOOP-17430. Add clear bytes logic for hadoop Text
dgzdot commented on a change in pull request #2545: URL: https://github.com/apache/hadoop/pull/2545#discussion_r545821814 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java ## @@ -226,6 +226,12 @@ public void set(String string) { * Set to a utf8 byte array. */ public void set(byte[] utf8) { +if(utf8.length == 0){ + bytes = EMPTY_BYTES; + length = 0; + textLength = -1; + return; Review comment: Thank you for your advice. I'll fix it
[GitHub] [hadoop] srinivasst opened a new pull request #2564: YARN-10538: Add RECOMMISSIONING nodes to the list of updated nodes returned to the AM
srinivasst opened a new pull request #2564: URL: https://github.com/apache/hadoop/pull/2564 YARN-6483 introduced nodes that transitioned to DECOMMISSIONING state to the list of updated nodes returned to the AM. This allows the Spark application master to gracefully decommission its containers on the decommissioning node. But if the node were to be recommissioned, the Spark application master would not be aware of this. This PR adds the recommissioned node to the list of updated nodes sent to the AM when a recommission node transition occurs.
[jira] [Work logged] (HADOOP-17430) There is no way to clear Text bytes now
[ https://issues.apache.org/jira/browse/HADOOP-17430?focusedWorklogId=525971=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-525971 ] ASF GitHub Bot logged work on HADOOP-17430: --- Author: ASF GitHub Bot Created on: 18/Dec/20 12:49 Start Date: 18/Dec/20 12:49 Worklog Time Spent: 10m Work Description: steveloughran commented on a change in pull request #2545: URL: https://github.com/apache/hadoop/pull/2545#discussion_r545809363 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java ## @@ -226,6 +226,12 @@ public void set(String string) { * Set to a utf8 byte array. */ public void set(byte[] utf8) { +if(utf8.length == 0){ + bytes = EMPTY_BYTES; + length = 0; + textLength = -1; + return; Review comment: I think I'd prefer to cut the return here and just have an `else` clause for the normal set() operation. This isn't so long and complex a piece of code that an early return makes sense; it can be easy to miss that it's there when reading the code. ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java ## @@ -226,6 +226,12 @@ public void set(String string) { * Set to a utf8 byte array. */ public void set(byte[] utf8) { +if(utf8.length == 0){ Review comment: nit: can you add spaces either side of the (), e.g ``` if (utf8.length == 0) { ``` Issue Time Tracking --- Worklog Id: (was: 525971) Time Spent: 2h 10m (was: 2h)
[GitHub] [hadoop] steveloughran commented on a change in pull request #2545: HADOOP-17430. Add clear bytes logic for hadoop Text
steveloughran commented on a change in pull request #2545: URL: https://github.com/apache/hadoop/pull/2545#discussion_r545809363 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java ## @@ -226,6 +226,12 @@ public void set(String string) { * Set to a utf8 byte array. */ public void set(byte[] utf8) { +if(utf8.length == 0){ + bytes = EMPTY_BYTES; + length = 0; + textLength = -1; + return; Review comment: I think I'd prefer to cut the return here and just have an `else` clause for the normal set() operation. This isn't so long and complex a piece of code that an early return makes sense; it can be easy to miss that it's there when reading the code. ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java ## @@ -226,6 +226,12 @@ public void set(String string) { * Set to a utf8 byte array. */ public void set(byte[] utf8) { +if(utf8.length == 0){ Review comment: nit: can you add spaces either side of the (), e.g ``` if (utf8.length == 0) { ```
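Taken together, the review comments ask for spaces around the condition and an `else` branch instead of the early return. The sketch below applies that shape to a simplified stand-in class — this is not the real org.apache.hadoop.io.Text (whose normal path delegates to set(byte[], int, int) and shares the backing array differently); it only illustrates the suggested control flow and the memory-freeing behaviour the Jira asks for.

```java
// Simplified stand-in for org.apache.hadoop.io.Text, showing the revised
// set(byte[]) shape suggested in review: spaced parentheses, and an else
// clause for the normal copy path rather than an early return.
public class MiniText {
    private static final byte[] EMPTY_BYTES = new byte[0];
    private byte[] bytes = EMPTY_BYTES;
    private int length;
    private int textLength = -1;

    public void set(byte[] utf8) {
        if (utf8.length == 0) {
            // Drop the backing array so its memory can be reclaimed.
            bytes = EMPTY_BYTES;
            length = 0;
            textLength = -1;
        } else {
            // Normal path: copy the bytes in (simplified from the real class).
            bytes = new byte[utf8.length];
            System.arraycopy(utf8, 0, bytes, 0, utf8.length);
            length = utf8.length;
            textLength = -1;
        }
    }

    public byte[] getBytes() { return bytes; }
    public int getLength() { return length; }
}
```

With this shape, `set(new byte[0])` releases the old buffer, matching the behaviour promised by the clear() javadoc quoted in the issue.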
[jira] [Work logged] (HADOOP-17414) Magic committer files don't have the count of bytes written collected by spark
[ https://issues.apache.org/jira/browse/HADOOP-17414?focusedWorklogId=525966=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-525966 ] ASF GitHub Bot logged work on HADOOP-17414: --- Author: ASF GitHub Bot Created on: 18/Dec/20 12:40 Start Date: 18/Dec/20 12:40 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2530: URL: https://github.com/apache/hadoop/pull/2530#issuecomment-748063774 > Gentle ping~ for who? I'm happy with this, the only open design issue is "should we lower case all the classic headers?" Primarily as it reduces the risk of someone getting the .equals wrong in different countries Issue Time Tracking --- Worklog Id: (was: 525966) Time Spent: 2h 50m (was: 2h 40m) > Magic committer files don't have the count of bytes written collected by spark > -- > > Key: HADOOP-17414 > URL: https://issues.apache.org/jira/browse/HADOOP-17414 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 2h 50m > Remaining Estimate: 0h > > The spark statistics tracking doesn't correctly assess the size of the > uploaded files as it only calls getFileStatus on the zero byte objects -not > the yet-to-manifest files. Which, given they don't exist yet, isn't easy to > do. > Solution: > * Add getXAttr and listXAttr API calls to S3AFileSystem > * Return all S3 object headers as XAttr attributes prefixed "header." That's > custom and standard (e.g header.Content-Length). > The setXAttr call isn't implemented, so for correctness the FS doesn't > declare its support for the API in hasPathCapability(). > The magic commit file write sets the custom header > x-hadoop-s3a-magic-data-length in the marker file to the > length of the final data. > A matching patch in Spark will look for the XAttr > "header.x-hadoop-s3a-magic-data-length" when the file > being probed for output data is zero bytes long. > As a result, the job tracking statistics will report the > bytes written but yet to be manifest.
[GitHub] [hadoop] steveloughran commented on pull request #2530: HADOOP-17414. Magic committer files don't have the count of bytes written collected by spark
steveloughran commented on pull request #2530: URL: https://github.com/apache/hadoop/pull/2530#issuecomment-748063774 > Gentle ping~ for who? I'm happy with this, the only open design issue is "should we lower case all the classic headers?" Primarily as it reduces the risk of someone getting the .equals wrong in different countries
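The consumer side of the scheme described in HADOOP-17414 is: stat the marker object, and if it is zero bytes long, look up the header.x-hadoop-s3a-magic-data-length XAttr for the real pending length. A minimal sketch of that fallback, with a plain Map standing in for the filesystem's XAttr result — the helper name declaredLength is hypothetical, only the XAttr key comes from the Jira:

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;

public class MagicMarkerLength {
    // XAttr key under which the pending data length is exposed, per the Jira.
    static final String MAGIC_LEN_XATTR = "header.x-hadoop-s3a-magic-data-length";

    // Given a marker file's XAttr map and its FileStatus length, return the
    // length of the yet-to-manifest data, falling back to the status length
    // when the attribute is absent or unparseable.
    static long declaredLength(Map<String, byte[]> xattrs, long statusLen) {
        byte[] v = xattrs.get(MAGIC_LEN_XATTR);
        if (v == null) {
            return statusLen;
        }
        try {
            return Long.parseLong(new String(v, StandardCharsets.UTF_8).trim());
        } catch (NumberFormatException e) {
            return statusLen;
        }
    }
}
```

The fallback to the status length is what keeps a caller correct against stores that do not publish the custom header.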
[GitHub] [hadoop] sodonnel commented on pull request #2562: HDFS-15737. Don't remove datanodes from outOfServiceNodeBlocks while checking in DatanodeAdminManager
sodonnel commented on pull request #2562: URL: https://github.com/apache/hadoop/pull/2562#issuecomment-748053842 It looks like this same logic also exists in trunk - could you submit a trunk PR / patch and then we can backport the change across all active branches? I am also a little confused about this problem. The map `outOfServiceNodeBlocks` is modified in a few places in the middle of the Cyclic Iteration. If it threw a ConcurrentModificationException on modification, then I would expect us to be seeing this a lot, probably any time there is more than 1 node added to decommission / maintenance. Eg, from trunk DatanodeAdminDefaultMonitor.java, here `it` is a CyclicIterator over `outOfServiceNodeBlocks` ``` while (it.hasNext() && !exceededNumBlocksPerCheck() && namesystem .isRunning()) { numNodesChecked++; final Map.Entry<DatanodeDescriptor, AbstractList<BlockInfo>> entry = it.next(); final DatanodeDescriptor dn = entry.getKey(); try { AbstractList<BlockInfo> blocks = entry.getValue(); boolean fullScan = false; if (dn.isMaintenance() && dn.maintenanceExpired()) { // If maintenance expires, stop tracking it. dnAdmin.stopMaintenance(dn); toRemove.add(dn); continue; } if (dn.isInMaintenance()) { // The dn is IN_MAINTENANCE and the maintenance hasn't expired yet. continue; } if (blocks == null) { // This is a newly added datanode, run through its list to schedule // under-replicated blocks for replication and collect the blocks // that are insufficiently replicated for further tracking LOG.debug("Newly-added node {}, doing full scan to find " + "insufficiently-replicated blocks.", dn); blocks = handleInsufficientlyStored(dn); outOfServiceNodeBlocks.put(dn, blocks); // Modifies outOfServiceNodeBlocks ... ``` Note that outOfServiceNodeBlocks is modified on the first pass, and so `it.next()` should throw an exception on the next iteration. Have you seen the ConcurrentModificationException logged due to this problem?
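One likely resolution of the puzzle above: java.util's fail-fast iterators only trip on *structural* modification. `outOfServiceNodeBlocks.put(dn, blocks)` replaces the value of a key that is already in the map, which does not bump the map's modCount, so no ConcurrentModificationException is raised; adding a brand-new key mid-iteration would raise one. A self-contained demonstration with a plain HashMap (Hadoop's CyclicIteration wrapper may behave differently, so this only shows the underlying JDK behaviour):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CmeDemo {
    // Replacing the value of an EXISTING key during iteration is not a
    // structural modification, so the fail-fast iterator does not trip.
    static boolean replaceExistingThrows() {
        Map<String, List<Integer>> map = new HashMap<>();
        map.put("dn1", null);
        map.put("dn2", null);
        try {
            for (Map.Entry<String, List<Integer>> e : map.entrySet()) {
                map.put(e.getKey(), new ArrayList<>()); // value swap only
            }
            return false;
        } catch (ConcurrentModificationException ex) {
            return true;
        }
    }

    // Adding a NEW key during iteration is a structural modification, so
    // the next call to Iterator.next() throws.
    static boolean addNewKeyThrows() {
        Map<String, String> map = new HashMap<>();
        map.put("a", "1");
        map.put("b", "2");
        try {
            for (String k : map.keySet()) {
                map.put("new-" + k, "x");
            }
            return false;
        } catch (ConcurrentModificationException ex) {
            return true;
        }
    }
}
```

On this reading, the `put(dn, blocks)` in the monitor is safe for keys already being tracked, and the rarity of the exception in practice is expected.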
[GitHub] [hadoop] hadoop-yetus commented on pull request #309: MAPREDUCE-7017:Too many times of meaningless invocation in TaskAttemptImpl#resolveHosts
hadoop-yetus commented on pull request #309: URL: https://github.com/apache/hadoop/pull/309#issuecomment-748047426 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 31s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 46s | | trunk passed | | +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 0m 33s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 33s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 39s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 44s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 31s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 1m 3s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 1m 0s | | trunk passed | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 31s | | the patch passed | | +1 :green_heart: | compile | 0m 29s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 0m 25s | | the patch passed | | +1 :green_heart: | checkstyle | 0m 24s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 28s | | the patch passed | | -1 :x: | whitespace | 0m 0s | [/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-309/8/artifact/out/whitespace-eol.txt) | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | shadedclient | 14m 50s | | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 26s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | -1 :x: | findbugs | 1m 2s | [/new-findbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-309/8/artifact/out/new-findbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.html) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | _ Other Tests _ | | -1 :x: | unit | 8m 19s | [/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-309/8/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt) | hadoop-mapreduce-client-app in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. | | | | 84m 12s | | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app | | | Write to static field org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.appContext from instance method new org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl(TaskId, int, EventHandler, TaskAttemptListener, Path, int, JobConf, String[], Token, Credentials, Clock, AppContext) At TaskAttemptImpl.java:from instance method new org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl(TaskId, int, EventHandler, TaskAttemptListener, Path, int, JobConf, String[], Token, Credentials, Clock, AppContext) At TaskAttemptImpl.java:[line 675] | | Failed junit tests | hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt | | Subsystem |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2563: YARN-10463: For Federation, we should support getApplicationAttemptRe…
hadoop-yetus commented on pull request #2563: URL: https://github.com/apache/hadoop/pull/2563#issuecomment-747952910 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 30m 21s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 34s | | trunk passed | | +1 :green_heart: | compile | 0m 31s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 0m 28s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 25s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 34s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 20s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 0m 47s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 0m 45s | | trunk passed | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 26s | | the patch passed | | +1 :green_heart: | compile | 0m 23s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 0m 23s | | the patch passed | | +1 :green_heart: | compile | 0m 21s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 0m 21s | | the patch passed | | +1 :green_heart: | checkstyle | 0m 15s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 23s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 14m 14s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | findbugs | 0m 47s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 2m 40s | | hadoop-yarn-server-router in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | | The patch does not generate ASF License warnings. 
| | | | 106m 2s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2563/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2563 | | JIRA Issue | YARN-10463 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 5ccfb669dc74 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 7a88f453667 | | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2563/3/testReport/ | | Max. process+thread count | 936 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2563/3/console | | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2563: YARN-10463: For Federation, we should support getApplicationAttemptRe…
hadoop-yetus commented on pull request #2563: URL: https://github.com/apache/hadoop/pull/2563#issuecomment-747944086

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 1m 48s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 36m 2s | | trunk passed |
| +1 :green_heart: | compile | 0m 28s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | compile | 0m 26s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | checkstyle | 0m 22s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 32s | | trunk passed |
| +1 :green_heart: | shadedclient | 18m 26s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 25s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 0m 21s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +0 :ok: | spotbugs | 0m 45s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 0m 43s | | trunk passed |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 27s | | the patch passed |
| +1 :green_heart: | compile | 0m 24s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javac | 0m 24s | | the patch passed |
| +1 :green_heart: | compile | 0m 19s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | javac | 0m 19s | | the patch passed |
| +1 :green_heart: | checkstyle | 0m 13s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 23s | | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 17m 33s | | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | findbugs | 0m 43s | | the patch passed |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 34s | | hadoop-yarn-server-router in the patch passed. |
| +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. |
| | | | 85m 32s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2563/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2563 |
| JIRA Issue | YARN-10463 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux c2f1523df2f0 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 7a88f453667 |
| Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2563/2/testReport/ |
| Max. process+thread count | 877 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2563/2/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2563: YARN-10463: For Federation, we should support getApplicationAttemptRe…
hadoop-yetus commented on pull request #2563: URL: https://github.com/apache/hadoop/pull/2563#issuecomment-747936623

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 1m 31s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 36m 23s | | trunk passed |
| +1 :green_heart: | compile | 0m 31s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | compile | 0m 26s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | checkstyle | 0m 20s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 28s | | trunk passed |
| +1 :green_heart: | shadedclient | 18m 9s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 0m 24s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +0 :ok: | spotbugs | 0m 50s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 0m 46s | | trunk passed |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 27s | | the patch passed |
| +1 :green_heart: | compile | 0m 22s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javac | 0m 22s | | the patch passed |
| +1 :green_heart: | compile | 0m 20s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | javac | 0m 20s | | the patch passed |
| -0 :warning: | checkstyle | 0m 14s | [/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2563/1/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) |
| +1 :green_heart: | mvnsite | 0m 22s | | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 16m 29s | | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | findbugs | 0m 52s | | the patch passed |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 46s | | hadoop-yarn-server-router in the patch passed. |
| +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. |
| | | | 84m 53s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2563/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2563 |
| JIRA Issue | YARN-10463 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 2880f3a0ca7d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 7a88f453667 |
| Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2563/1/testReport/ |
| Max. process+thread count | 878 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2563/1/console |