[jira] [Created] (HADOOP-17267) Add debug-level logs in Filesystem#close

2020-09-17 Thread Karen Coppage (Jira)
Karen Coppage created HADOOP-17267:
--

 Summary: Add debug-level logs in Filesystem#close
 Key: HADOOP-17267
 URL: https://issues.apache.org/jira/browse/HADOOP-17267
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Karen Coppage


HDFS reuses the same cached FileSystem object across the whole system. If the 
client calls FileSystem.close(), closeAllForUgi(), or closeAll() (where it 
applies to the instance) anywhere in the system, the cache entry for that FS 
instance is purged, and any further use of the instance results in an 
IOException: FileSystem closed.

It would be a great help to clients to see where and when a given FS instance 
was closed. That is, in close(), closeAllForUgi(), and closeAll(), it would be 
useful to have a DEBUG-level log of
 * the calling method name, class, and file name/line number
 * the FileSystem object's identity hash (FileSystem.close() only)
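
A minimal sketch of what such a log could look like, assuming SLF4J (which 
Hadoop already uses) and a plain stack walk; the helper class and its wiring 
are illustrative only, not the proposed patch:
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative helper: log who closed a FileSystem instance and its
// identity hash. In the real patch this would live in FileSystem itself.
final class CloseLogger {
  private static final Logger LOG = LoggerFactory.getLogger(CloseLogger.class);

  static void logClose(Object fs) {
    if (LOG.isDebugEnabled()) {
      // Index 2 skips getStackTrace() and logClose() to reach the caller.
      StackTraceElement caller = Thread.currentThread().getStackTrace()[2];
      LOG.debug("FileSystem@{} closed by {}.{} ({}:{})",
          Integer.toHexString(System.identityHashCode(fs)),
          caller.getClassName(), caller.getMethodName(),
          caller.getFileName(), caller.getLineNumber());
    }
  }
}
{code}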






[jira] [Commented] (HADOOP-17266) Sudo in hadoop-functions.sh should preserve environment variables

2020-09-17 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17197490#comment-17197490
 ] 

Akira Ajisaka commented on HADOOP-17266:


Umm... The directory structure in Apache Hadoop releases should be as follows:
{noformat}
${HADOOP_HOME}/
├── bin/
│   └── hdfs
└── libexec/
    └── hdfs-config.sh{noformat}
and that's why bin/../libexec/hdfs-config.sh should exist.

> Sudo in hadoop-functions.sh should preserve environment variables 
> --
>
> Key: HADOOP-17266
> URL: https://issues.apache.org/jira/browse/HADOOP-17266
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.3.0
>Reporter: Chengbing Liu
>Priority: Major
> Attachments: HADOOP-17266.001.patch
>
>
> Steps to reproduce:
> 1. Set {{HDFS_NAMENODE_USER=hdfs}} in {{/etc/default/hadoop-hdfs-namenode}} 
> to enable user check (and switch to {{hdfs}} to start/stop NameNode daemon)
> 2. Stop NameNode with: {{service hadoop-hdfs-namenode stop}}
> 3. Got an error and NameNode is not stopped
> {noformat}
> ERROR: Cannot execute /usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh.
> Failed to stop Hadoop namenode. Return value: 1. [FAILED]
> {noformat}
> The root cause is that after sudo, {{HADOOP_HOME=/usr/lib/hadoop}} is not 
> preserved, and {{/usr/lib/hadoop-hdfs/bin/hdfs}} locates libexec by the 
> following logic:
> {noformat}
> # let's locate libexec...
> if [[ -n "${HADOOP_HOME}" ]]; then
>   HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_HOME}/libexec"
> else
>   bin=$(cd -P -- "$(dirname -- "${MYNAME}")" >/dev/null && pwd -P)
>   HADOOP_DEFAULT_LIBEXEC_DIR="${bin}/../libexec"
> fi
> {noformat}
> I believe the key point here is that we should preserve environment variables 
> when doing sudo.
> Note that this bug was not introduced by HDFS-15353, before which {{su -l}} 
> was used, which also discards environment variables.






[GitHub] [hadoop] bshashikant merged pull request #2296: HDFS-15568. namenode start failed to start when dfs.namenode.max.snapshot.limit set.

2020-09-17 Thread GitBox


bshashikant merged pull request #2296:
URL: https://github.com/apache/hadoop/pull/2296


   






[GitHub] [hadoop] bshashikant commented on pull request #2296: HDFS-15568. namenode start failed to start when dfs.namenode.max.snapshot.limit set.

2020-09-17 Thread GitBox


bshashikant commented on pull request #2296:
URL: https://github.com/apache/hadoop/pull/2296#issuecomment-694109588


   Thanks @szetszwo and @goiri for the review.






[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2280: HADOOP-17244. S3A directory delete tombstones dir markers prematurely.

2020-09-17 Thread GitBox


hadoop-yetus removed a comment on pull request #2280:
URL: https://github.com/apache/hadoop/pull/2280#issuecomment-688527384










[jira] [Work logged] (HADOOP-17244) HADOOP-17244. S3A directory delete tombstones dir markers prematurely.

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17244?focusedWorklogId=485628&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-485628
 ]

ASF GitHub Bot logged work on HADOOP-17244:
---

Author: ASF GitHub Bot
Created on: 17/Sep/20 09:58
Start Date: 17/Sep/20 09:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2280:
URL: https://github.com/apache/hadoop/pull/2280#issuecomment-688527384









Issue Time Tracking
---

Worklog Id: (was: 485628)
Time Spent: 1h 40m  (was: 1.5h)

> HADOOP-17244. S3A directory delete tombstones dir markers prematurely.
> --
>
> Key: HADOOP-17244
> URL: https://issues.apache.org/jira/browse/HADOOP-17244
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Test failure: 
> {{ITestS3AFileContextMainOperations#testRenameDirectoryAsNonExistentDirectory}}
> This is repeatable on -Dauth runs (we haven't been running them, have we?)
> Either it's from the recent dir marker changes (initial hypothesis) or it's 
> been lurking a while and not been picked up.






[GitHub] [hadoop] steveloughran commented on a change in pull request #2307: HADOOP-17250 Lot of short reads can be merged with readahead.

2020-09-17 Thread GitBox


steveloughran commented on a change in pull request #2307:
URL: https://github.com/apache/hadoop/pull/2307#discussion_r490146585



##
File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
##
@@ -180,9 +205,13 @@ private int readOneBlock(final byte[] b, final int off, final int len) throws IO
 
       // Enable readAhead when reading sequentially
       if (-1 == fCursorAfterLastRead || fCursorAfterLastRead == fCursor || b.length >= bufferSize) {
+        LOG.debug("Sequential read with read ahead size of {}", bufferSize);
         bytesRead = readInternal(fCursor, buffer, 0, bufferSize, false);
       } else {
-        bytesRead = readInternal(fCursor, buffer, 0, b.length, true);
+        // Enabling read ahead for random reads as well to reduce number of remote calls.
+        int lengthWithReadAhead = Math.min(b.length + readAheadRange, bufferSize);
+        LOG.debug("Random read with read ahead size of {}", lengthWithReadAhead);
+        bytesRead = readInternal(fCursor, buffer, 0, lengthWithReadAhead, true);

Review comment:
   Based on the S3A experience (which didn't always read into a buffer, 
BTW), the "penalty" of a large readahead range is that there is more data to 
drain when you want to cancel the read (i.e. a seek out of range). 
   That code does the draining in the active thread. If that were done in 
a background thread, the penalty of a larger readahead would be smaller, as 
you would only see a delay from the draining if there were no free HTTPS 
connections in the pool. Setting up a new HTTPS connection is expensive, 
though; if there were no free HTTPS connections in the pool, you would be 
better off draining the stream in the active thread. Maybe.
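
   (For illustration only: a minimal sketch of the background-drain idea 
discussed above, assuming a shared executor is available; the class and 
method names are hypothetical, not S3A or ABFS code.)
{code}
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.ExecutorService;

// Hand an abandoned HTTP response stream to a background executor so the
// seeking caller does not pay for reading the unread readahead bytes.
final class StreamDrainer {
  static void drainAsync(ExecutorService pool, InputStream abandoned) {
    pool.execute(() -> {
      byte[] scratch = new byte[64 * 1024];
      try (InputStream in = abandoned) {
        // Read to EOF so the underlying connection can return to the pool.
        while (in.read(scratch) >= 0) {
          // discard the drained bytes
        }
      } catch (IOException e) {
        // Draining is best-effort; a failure just closes the connection.
      }
    });
  }
}
{code}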








[jira] [Work logged] (HADOOP-17250) ABFS: Allow random read sizes to be of buffer size

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17250?focusedWorklogId=485649&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-485649
 ]

ASF GitHub Bot logged work on HADOOP-17250:
---

Author: ASF GitHub Bot
Created on: 17/Sep/20 10:44
Start Date: 17/Sep/20 10:44
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#2307:
URL: https://github.com/apache/hadoop/pull/2307#discussion_r490146585



##
File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
##
@@ -180,9 +205,13 @@ private int readOneBlock(final byte[] b, final int off, final int len) throws IO
 
       // Enable readAhead when reading sequentially
       if (-1 == fCursorAfterLastRead || fCursorAfterLastRead == fCursor || b.length >= bufferSize) {
+        LOG.debug("Sequential read with read ahead size of {}", bufferSize);
         bytesRead = readInternal(fCursor, buffer, 0, bufferSize, false);
       } else {
-        bytesRead = readInternal(fCursor, buffer, 0, b.length, true);
+        // Enabling read ahead for random reads as well to reduce number of remote calls.
+        int lengthWithReadAhead = Math.min(b.length + readAheadRange, bufferSize);
+        LOG.debug("Random read with read ahead size of {}", lengthWithReadAhead);
+        bytesRead = readInternal(fCursor, buffer, 0, lengthWithReadAhead, true);

Review comment:
   Based on the S3A experience (which didn't always read into a buffer, 
BTW), the "penalty" of a large readahead range is that there is more data to 
drain when you want to cancel the read (i.e. a seek out of range). 
   That code does the draining in the active thread. If that were done in 
a background thread, the penalty of a larger readahead would be smaller, as 
you would only see a delay from the draining if there were no free HTTPS 
connections in the pool. Setting up a new HTTPS connection is expensive, 
though; if there were no free HTTPS connections in the pool, you would be 
better off draining the stream in the active thread. Maybe.







Issue Time Tracking
---

Worklog Id: (was: 485649)
Time Spent: 50m  (was: 40m)

> ABFS: Allow random read sizes to be of buffer size
> --
>
> Key: HADOOP-17250
> URL: https://issues.apache.org/jira/browse/HADOOP-17250
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: abfsactive, pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The ADLS Gen2/ABFS driver is optimized to read only the bytes that are 
> requested when the read pattern is random. 
> It was observed in some Spark jobs that although the reads are random, the 
> next read doesn't skip far ahead, and could have been served by the earlier 
> read had that read been done at buffer size. As a result the job triggered a 
> higher count of read calls, which resulted in a higher job runtime.
> When these jobs were run against Gen1, which always reads at buffer size, 
> the jobs fared well. 
> In this Jira we provide a config to control whether random reads are done 
> at the requested size or at buffer size.






[jira] [Work logged] (HADOOP-17250) ABFS: Allow random read sizes to be of buffer size

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17250?focusedWorklogId=485651&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-485651
 ]

ASF GitHub Bot logged work on HADOOP-17250:
---

Author: ASF GitHub Bot
Created on: 17/Sep/20 10:45
Start Date: 17/Sep/20 10:45
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#2307:
URL: https://github.com/apache/hadoop/pull/2307#discussion_r490146585



##
File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
##
@@ -180,9 +205,13 @@ private int readOneBlock(final byte[] b, final int off, final int len) throws IO
 
       // Enable readAhead when reading sequentially
       if (-1 == fCursorAfterLastRead || fCursorAfterLastRead == fCursor || b.length >= bufferSize) {
+        LOG.debug("Sequential read with read ahead size of {}", bufferSize);
         bytesRead = readInternal(fCursor, buffer, 0, bufferSize, false);
       } else {
-        bytesRead = readInternal(fCursor, buffer, 0, b.length, true);
+        // Enabling read ahead for random reads as well to reduce number of remote calls.
+        int lengthWithReadAhead = Math.min(b.length + readAheadRange, bufferSize);
+        LOG.debug("Random read with read ahead size of {}", lengthWithReadAhead);
+        bytesRead = readInternal(fCursor, buffer, 0, lengthWithReadAhead, true);

Review comment:
   Based on the S3A experience (which didn't always read into a buffer, 
BTW), the "penalty" of a large readahead range is that there is more data to 
drain when you want to cancel the read (i.e. a seek out of range). 
   That code does the draining in the active thread. If that were done in 
a background thread, the penalty of a larger readahead would be smaller, as 
you would only see a delay from the draining if there were no free HTTPS 
connections in the pool. Setting up a new HTTPS connection is expensive, 
though; if there were no free HTTPS connections in the pool, you would be 
better off draining the stream in the active thread. Maybe.
   
   (Disclaimer: all my claims about HTTPS costs are based on S3 + Java 7/8, 
and S3 is very slow to set up a connection. If the ADLS Gen2 store is faster 
to negotiate, then draining in a separate thread becomes a lot more 
justifiable.)







Issue Time Tracking
---

Worklog Id: (was: 485651)
Time Spent: 1h  (was: 50m)

> ABFS: Allow random read sizes to be of buffer size
> --
>
> Key: HADOOP-17250
> URL: https://issues.apache.org/jira/browse/HADOOP-17250
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: abfsactive, pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The ADLS Gen2/ABFS driver is optimized to read only the bytes that are 
> requested when the read pattern is random. 
> It was observed in some Spark jobs that although the reads are random, the 
> next read doesn't skip far ahead, and could have been served by the earlier 
> read had that read been done at buffer size. As a result the job triggered a 
> higher count of read calls, which resulted in a higher job runtime.
> When these jobs were run against Gen1, which always reads at buffer size, 
> the jobs fared well. 
> In this Jira we provide a config to control whether random reads are done 
> at the requested size or at buffer size.






[GitHub] [hadoop] steveloughran commented on a change in pull request #2307: HADOOP-17250 Lot of short reads can be merged with readahead.

2020-09-17 Thread GitBox


steveloughran commented on a change in pull request #2307:
URL: https://github.com/apache/hadoop/pull/2307#discussion_r490146585



##
File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
##
@@ -180,9 +205,13 @@ private int readOneBlock(final byte[] b, final int off, final int len) throws IO
 
       // Enable readAhead when reading sequentially
       if (-1 == fCursorAfterLastRead || fCursorAfterLastRead == fCursor || b.length >= bufferSize) {
+        LOG.debug("Sequential read with read ahead size of {}", bufferSize);
         bytesRead = readInternal(fCursor, buffer, 0, bufferSize, false);
       } else {
-        bytesRead = readInternal(fCursor, buffer, 0, b.length, true);
+        // Enabling read ahead for random reads as well to reduce number of remote calls.
+        int lengthWithReadAhead = Math.min(b.length + readAheadRange, bufferSize);
+        LOG.debug("Random read with read ahead size of {}", lengthWithReadAhead);
+        bytesRead = readInternal(fCursor, buffer, 0, lengthWithReadAhead, true);

Review comment:
   Based on the S3A experience (which didn't always read into a buffer, 
BTW), the "penalty" of a large readahead range is that there is more data to 
drain when you want to cancel the read (i.e. a seek out of range). 
   That code does the draining in the active thread. If that were done in 
a background thread, the penalty of a larger readahead would be smaller, as 
you would only see a delay from the draining if there were no free HTTPS 
connections in the pool. Setting up a new HTTPS connection is expensive, 
though; if there were no free HTTPS connections in the pool, you would be 
better off draining the stream in the active thread. Maybe.
   
   (Disclaimer: all my claims about HTTPS costs are based on S3 + Java 7/8, 
and S3 is very slow to set up a connection. If the ADLS Gen2 store is faster 
to negotiate, then draining in a separate thread becomes a lot more 
justifiable.)








[GitHub] [hadoop] steveloughran commented on pull request #2307: HADOOP-17250 Lot of short reads can be merged with readahead.

2020-09-17 Thread GitBox


steveloughran commented on pull request #2307:
URL: https://github.com/apache/hadoop/pull/2307#issuecomment-694153280


   Sneha, 
   What are the likely times to
   1. negotiate a new HTTPS connection
   2. read 4MB in a single ranged GET request
   3. read less than 4MB in a single ranged GET request, e.g. 2MB.
   
   If there's a fixed latency for the GET irrespective of size, then small 
reads are very inefficient per byte, and reading the whole buffer would be 
justifiable. 
   
   Also: which makes for the simplest code to write, review, and maintain? 
Let's not ignore that little detail, especially given my experience of 
shipping a broken implementation of this in S3AInputStream.
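
   (As a back-of-envelope illustration of the fixed-latency argument; the 
numbers below are assumptions, not ADLS measurements. With a fixed per-GET 
latency L and stream bandwidth B, an s-byte ranged GET achieves an effective 
throughput of s / (L + s/B).)
{code}
// Illustrative model only; latency and bandwidth figures are assumed.
public class GetCost {
  public static void main(String[] args) {
    double latencySec = 0.1;    // assumed ~100 ms fixed cost per GET
    double bandwidth = 100e6;   // assumed ~100 MB/s stream rate
    for (double size : new double[] {64e3, 2e6, 4e6}) {
      double totalSec = latencySec + size / bandwidth;
      System.out.printf("%6.0f KB GET -> %6.1f ms, %5.1f MB/s effective%n",
          size / 1e3, totalSec * 1e3, size / totalSec / 1e6);
    }
  }
}
{code}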
   






[jira] [Work logged] (HADOOP-17250) ABFS: Allow random read sizes to be of buffer size

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17250?focusedWorklogId=485655&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-485655
 ]

ASF GitHub Bot logged work on HADOOP-17250:
---

Author: ASF GitHub Bot
Created on: 17/Sep/20 10:50
Start Date: 17/Sep/20 10:50
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2307:
URL: https://github.com/apache/hadoop/pull/2307#issuecomment-694153280


   Sneha, 
   What are the likely times to
   1. negotiate a new HTTPS connection
   2. read 4MB in a single ranged GET request
   3. read less than 4MB in a single ranged GET request, e.g. 2MB.
   
   If there's a fixed latency for the GET irrespective of size, then small 
reads are very inefficient per byte, and reading the whole buffer would be 
justifiable. 
   
   Also: which makes for the simplest code to write, review, and maintain? 
Let's not ignore that little detail, especially given my experience of 
shipping a broken implementation of this in S3AInputStream.
   





Issue Time Tracking
---

Worklog Id: (was: 485655)
Time Spent: 1h 10m  (was: 1h)

> ABFS: Allow random read sizes to be of buffer size
> --
>
> Key: HADOOP-17250
> URL: https://issues.apache.org/jira/browse/HADOOP-17250
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: abfsactive, pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The ADLS Gen2/ABFS driver is optimized to read only the bytes that are 
> requested when the read pattern is random. 
> It was observed in some Spark jobs that although the reads are random, the 
> next read doesn't skip far ahead, and could have been served by the earlier 
> read had that read been done at buffer size. As a result the job triggered a 
> higher count of read calls, which resulted in a higher job runtime.
> When these jobs were run against Gen1, which always reads at buffer size, 
> the jobs fared well. 
> In this Jira we provide a config to control whether random reads are done 
> at the requested size or at buffer size.






[GitHub] [hadoop] mukund-thakur commented on pull request #2307: HADOOP-17250 Lot of short reads can be merged with readahead.

2020-09-17 Thread GitBox


mukund-thakur commented on pull request #2307:
URL: https://github.com/apache/hadoop/pull/2307#issuecomment-694159756


   This is the output of a performance benchmark done with this patch, along 
with some Hive tuning. 
   NOTE: Results may differ now. 
   https://user-images.githubusercontent.com/10720944/93462153-7caaae00-f903-11ea-81ca-eed62fae7858.png
   






[jira] [Work logged] (HADOOP-17250) ABFS: Allow random read sizes to be of buffer size

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17250?focusedWorklogId=485663&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-485663
 ]

ASF GitHub Bot logged work on HADOOP-17250:
---

Author: ASF GitHub Bot
Created on: 17/Sep/20 11:05
Start Date: 17/Sep/20 11:05
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on pull request #2307:
URL: https://github.com/apache/hadoop/pull/2307#issuecomment-694159756


   This is the output of a performance benchmark done with this patch, along 
with some Hive tuning. 
   NOTE: Results may differ now. 
   https://user-images.githubusercontent.com/10720944/93462153-7caaae00-f903-11ea-81ca-eed62fae7858.png
   





Issue Time Tracking
---

Worklog Id: (was: 485663)
Time Spent: 1h 20m  (was: 1h 10m)

> ABFS: Allow random read sizes to be of buffer size
> --
>
> Key: HADOOP-17250
> URL: https://issues.apache.org/jira/browse/HADOOP-17250
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: abfsactive, pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The ADLS Gen2/ABFS driver is optimized to read only the bytes that are 
> requested when the read pattern is random. 
> It was observed in some Spark jobs that although the reads are random, the 
> next read doesn't skip far ahead, and could have been served by the earlier 
> read had that read been done at buffer size. As a result the job triggered a 
> higher count of read calls, which resulted in a higher job runtime.
> When these jobs were run against Gen1, which always reads at buffer size, 
> the jobs fared well. 
> In this Jira we provide a config to control whether random reads are done 
> at the requested size or at buffer size.






[GitHub] [hadoop] hadoop-yetus commented on pull request #2308: HADOOP-17238. Add mount point path resolution cache for viewfs.

2020-09-17 Thread GitBox


hadoop-yetus commented on pull request #2308:
URL: https://github.com/apache/hadoop/pull/2308#issuecomment-694196651


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  33m 22s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 12s |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 55s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  19m 44s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m  6s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  6s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 43s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m  2s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 35s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m  2s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 17s |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m  8s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  23m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 44s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  21m 44s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   3m 54s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 17s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 30s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 39s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   7m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  11m 52s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  | 130m 54s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 379m  6s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestMultipleNNPortQOP |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
   |   | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncCheckerTimeout |
   |   | hadoop.hdfs.TestSafeModeWithStripedFile |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestStripedFileAppend |
   |   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestGetFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2308/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2308 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 3636c064d50b 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e4cb0d35145 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04

[jira] [Work logged] (HADOOP-17238) Add ViewFileSystem/InodeTree Mount points Resolution Cache

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17238?focusedWorklogId=485713&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-485713
 ]

ASF GitHub Bot logged work on HADOOP-17238:
---

Author: ASF GitHub Bot
Created on: 17/Sep/20 12:27
Start Date: 17/Sep/20 12:27
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2308:
URL: https://github.com/apache/hadoop/pull/2308#issuecomment-694196651


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  33m 22s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 12s |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 55s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  19m 44s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m  6s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  6s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 43s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m  2s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 35s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m  2s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 17s |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m  8s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  23m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 44s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  21m 44s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   3m 54s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 17s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 30s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 39s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   7m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  11m 52s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  | 130m 54s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 379m  6s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestMultipleNNPortQOP |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
   |   | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncCheckerTimeout |
   |   | hadoop.hdfs.TestSafeModeWithStripedFile |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestStripedFileAppend |
   |   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestGetFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2308/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2308 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 3636c064d50b 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2297: HADOOP-17125. Using snappy-java in SnappyCodec

2020-09-17 Thread GitBox


hadoop-yetus removed a comment on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-693863169


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  29m 58s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 10s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 26s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 44s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 54s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 56s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 32s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 38s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 36s |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
   | +0 :ok: |  findbugs  |   0m 37s |  branch/hadoop-project-dist no findbugs 
output file (findbugsXml.xml)  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 49s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  20m 49s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 17 new + 146 unchanged - 
17 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  20m 49s |  the patch passed  |
   | +1 :green_heart: |  javac  |  20m 49s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 31s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  18m 31s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 11 new + 152 unchanged - 
11 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  18m 31s |  the patch passed  |
   | +1 :green_heart: |  javac  |  18m 31s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 49s |  root: The patch generated 6 new 
+ 151 unchanged - 5 fixed = 157 total (was 156)  |
   | +1 :green_heart: |  mvnsite  |   2m 32s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  5s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m  3s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 42s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  findbugs  |   0m 34s |  hadoop-project has no data from 
findbugs  |
   | +0 :ok: |  findbugs  |   0m 35s |  hadoop-project-dist has no data from 
findbugs  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 33s |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 34s |  hadoop-project-dist in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   9m 49s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 50s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 213m 57s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2297/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2297 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml cc findbugs checkstyle golang |
   | uname | Linux a3c4311f767f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-supp

[jira] [Work logged] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=485722&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-485722
 ]

ASF GitHub Bot logged work on HADOOP-17125:
---

Author: ASF GitHub Bot
Created on: 17/Sep/20 13:05
Start Date: 17/Sep/20 13:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-693863169


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  29m 58s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 10s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 26s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 44s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 54s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 56s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 32s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 38s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 36s |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
   | +0 :ok: |  findbugs  |   0m 37s |  branch/hadoop-project-dist no findbugs 
output file (findbugsXml.xml)  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 49s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  20m 49s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 17 new + 146 unchanged - 
17 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  20m 49s |  the patch passed  |
   | +1 :green_heart: |  javac  |  20m 49s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 31s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  18m 31s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 11 new + 152 unchanged - 
11 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  18m 31s |  the patch passed  |
   | +1 :green_heart: |  javac  |  18m 31s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 49s |  root: The patch generated 6 new 
+ 151 unchanged - 5 fixed = 157 total (was 156)  |
   | +1 :green_heart: |  mvnsite  |   2m 32s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  5s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m  3s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 42s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  findbugs  |   0m 34s |  hadoop-project has no data from 
findbugs  |
   | +0 :ok: |  findbugs  |   0m 35s |  hadoop-project-dist has no data from 
findbugs  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 33s |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 34s |  hadoop-project-dist in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   9m 49s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 50s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 213m 57s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-

[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-17 Thread GitBox


Hexiaoqiao commented on a change in pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#discussion_r490219801



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
##
@@ -134,6 +134,9 @@
   private final FileIoProvider fileIoProvider;
   private final DataNodeVolumeMetrics metrics;
   private URI baseURI;
+  private boolean enableSameDiskArchival;
+  private final String device;

Review comment:
   What about using `storageID` in place of `device`? IMO both of them exist 
to index a single volume, right?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
##
@@ -412,16 +435,31 @@ long getBlockPoolUsed(String bpid) throws IOException {
    */
   @VisibleForTesting
   public long getCapacity() {
+    long capacity;
     if (configuredCapacity < 0L) {
       long remaining;
       if (cachedCapacity > 0L) {
         remaining = cachedCapacity - getReserved();
       } else {
         remaining = usage.getCapacity() - getReserved();
       }
-      return Math.max(remaining, 0L);
+      capacity = Math.max(remaining, 0L);
+    } else {
+      capacity = configuredCapacity;
+    }
+
+    if (enableSameDiskArchival) {
+      double reservedForArchival = conf.getDouble(

Review comment:
   Is `reservedForArchival` here the same as `this.reservedForArchival` set 
at init time?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
##
@@ -190,6 +193,26 @@
     }
     this.conf = conf;
     this.fileIoProvider = fileIoProvider;
+    this.enableSameDiskArchival =
+        conf.getBoolean(DFSConfigKeys.DFS_DATANODE_ALLOW_SAME_DISK_TIERING,
+            DFSConfigKeys.DFS_DATANODE_ALLOW_SAME_DISK_TIERING_DEFAULT);
+    if (enableSameDiskArchival) {
+      this.device = usage.getMount();
+      reservedForArchive = conf.getDouble(
+          DFSConfigKeys.DFS_DATANODE_RESERVE_FOR_ARCHIVE_PERCENTAGE,
+          DFSConfigKeys.DFS_DATANODE_RESERVE_FOR_ARCHIVE_PERCENTAGE_DEFAULT);
+      if (reservedForArchive >= 1) {
+        FsDatasetImpl.LOG.warn("Value of reserve-for-archival is >= 100% for "
+            + currentDir + ". Setting it to 99%.");
+        reservedForArchive = 0.99;

Review comment:
   Why does `reservedForArchive` have to be less than 1 here? IIUC, setting 
`reservedForArchive` to 1 would mean this is an ARCHIVE-only device. Right?








[jira] [Assigned] (HADOOP-17268) Add RPC Quota to NameNode.

2020-09-17 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun reassigned HADOOP-17268:


Assignee: Jinglun

> Add RPC Quota to NameNode.
> --
>
> Key: HADOOP-17268
> URL: https://issues.apache.org/jira/browse/HADOOP-17268
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
>
> Add RPC request quota support to the NameNode. All requests exceeding the 
> quota would fail with a 'Server too busy' exception. This can prevent users 
> from overusing the service.






[jira] [Created] (HADOOP-17268) Add RPC Quota to NameNode.

2020-09-17 Thread Jinglun (Jira)
Jinglun created HADOOP-17268:


 Summary: Add RPC Quota to NameNode.
 Key: HADOOP-17268
 URL: https://issues.apache.org/jira/browse/HADOOP-17268
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jinglun


Add RPC request quota support to the NameNode. All requests exceeding the 
quota would fail with a 'Server too busy' exception. This can prevent users 
from overusing the service.
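
A minimal sketch of the idea, assuming a fixed-window per-user counter; all 
names here are illustrative and this is not the attached patch:
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative fixed-window quota: callers above the per-user limit are
// rejected; the real change would surface the RPC-level 'Server too busy'.
public class RpcQuota {
  private final long maxRequestsPerWindow;
  private final long windowMillis;
  private final Map<String, AtomicLong> counts = new ConcurrentHashMap<>();
  private volatile long windowStart = System.currentTimeMillis();

  public RpcQuota(long maxRequestsPerWindow, long windowMillis) {
    this.maxRequestsPerWindow = maxRequestsPerWindow;
    this.windowMillis = windowMillis;
  }

  public void checkQuota(String user) {
    long now = System.currentTimeMillis();
    if (now - windowStart >= windowMillis) {
      counts.clear();  // coarse window rollover; fine for a sketch
      windowStart = now;
    }
    long n = counts.computeIfAbsent(user, u -> new AtomicLong())
        .incrementAndGet();
    if (n > maxRequestsPerWindow) {
      throw new IllegalStateException("Server too busy");
    }
  }
}
{code}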






[GitHub] [hadoop] snvijaya commented on pull request #2307: HADOOP-17250 Lot of short reads can be merged with readahead.

2020-09-17 Thread GitBox


snvijaya commented on pull request #2307:
URL: https://github.com/apache/hadoop/pull/2307#issuecomment-694256719


   > Sneha,
   > What are the likely times to
   > 
   > 1. negotiate a new HTTPS connection
   > 2. read 4MB in a single ranged GET request
   > 3. read less than 4MB in a single ranged GET request, e.g. 2MB.
   > 
   > If there's a fixed latency for the GET irrespective of size, then small 
reads are very inefficient per byte, and reading the whole buffer would be 
justifiable.
   > 
   > Also: which makes for the simplest code to write, review, and maintain? 
Let's not ignore that little detail, especially given my experience of 
shipping a broken implementation of this in S3AInputStream.
   
   Hi Steve, I take your points and agree that measurements of the items 
above would validate a better setting for the readahead range config. Let me 
see whether I can measure points 1-3; please give me a couple of days to get 
back.






[jira] [Work logged] (HADOOP-17250) ABFS: Allow random read sizes to be of buffer size

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17250?focusedWorklogId=485754&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-485754
 ]

ASF GitHub Bot logged work on HADOOP-17250:
---

Author: ASF GitHub Bot
Created on: 17/Sep/20 14:01
Start Date: 17/Sep/20 14:01
Worklog Time Spent: 10m 
  Work Description: snvijaya commented on pull request #2307:
URL: https://github.com/apache/hadoop/pull/2307#issuecomment-694256719


   > Sneha,
   > What are the likely times to
   > 
   > 1. negotiate a new HTTPS connection
   > 2. read 4MB in a single ranged GET request
   > 3. read less than 4MB in a single ranged GET request, e.g. 2MB.
   > 
   > If there's a fixed latency for the GET irrespective of size, then small 
reads are very inefficient per byte, and reading the whole buffer would be 
justifiable.
   > 
   > Also: which makes for the simplest code to write, review, and maintain? 
Let's not ignore that little detail, especially given my experience of 
shipping a broken implementation of this in S3AInputStream.
   
   Hi Steve, I take your points and agree that measurements of the items 
above would validate a better setting for the readahead range config. Let me 
see whether I can measure points 1-3; please give me a couple of days to get 
back.





Issue Time Tracking
---

Worklog Id: (was: 485754)
Time Spent: 1.5h  (was: 1h 20m)

> ABFS: Allow random read sizes to be of buffer size
> --
>
> Key: HADOOP-17250
> URL: https://issues.apache.org/jira/browse/HADOOP-17250
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: abfsactive, pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The ADLS Gen2/ABFS driver is optimized to read only the bytes that are 
> requested when the read pattern is random. 
> It was observed in some Spark jobs that although the reads are random, the 
> next read doesn't skip far ahead, and could have been served by the earlier 
> read had that read been done at buffer size. As a result the job triggered a 
> higher count of read calls, which resulted in a higher job runtime.
> When these jobs were run against Gen1, which always reads at buffer size, 
> the jobs fared well. 
> In this Jira we provide a config to control whether random reads are done 
> at the requested size or at buffer size.






[jira] [Updated] (HADOOP-17268) Add RPC Quota to NameNode.

2020-09-17 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HADOOP-17268:
-
Attachment: HADOOP-17268.001.patch
Status: Patch Available  (was: Open)

> Add RPC Quota to NameNode.
> --
>
> Key: HADOOP-17268
> URL: https://issues.apache.org/jira/browse/HADOOP-17268
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-17268.001.patch
>
>
> Add an RPC request quota to the NameNode. All requests exceeding the quota 
> would fail with a 'Server too busy' exception. This can prevent users from 
> overusing the service.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17266) Sudo in hadoop-functions.sh should preserve environment variables

2020-09-17 Thread Chengbing Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17197723#comment-17197723
 ] 

Chengbing Liu commented on HADOOP-17266:


[~aajisaka] Many people, including us, use RPM-based Hadoop distributions such as 
CDH. These distributions are mostly built with something like Apache Bigtop, 
which defines the lib directories as follows (refer to 
https://github.com/apache/bigtop/blob/master/bigtop-packages/src/rpm/hadoop/SPECS/hadoop.spec
 )
{code}
%define lib_hadoop_dirname /usr/lib
%define lib_hadoop %{lib_hadoop_dirname}/%{name}
%define lib_hdfs %{lib_hadoop_dirname}/%{name}-hdfs
%define lib_yarn %{lib_hadoop_dirname}/%{name}-yarn
%define lib_mapreduce %{lib_hadoop_dirname}/%{name}-mapreduce
%define libexecdir /usr/lib
{code}

I think the {{bin/hdfs}} script does not necessarily assume the directory 
structure you mentioned; otherwise there would be no point in checking 
{{HADOOP_HOME}} first, which allows a different directory structure.

{quote}If '-E' option is enabled, unnecessary environment variables can be 
preserved.{quote}
As for your concern, I think the config scripts (such as hdfs-config.sh) must 
set all environment variables that are used, and the Hadoop scripts must not 
rely on any irrelevant "user environment variable".

Btw, many people work around this issue simply by not setting 
HDFS_NAMENODE_USER, so that no user check is made and no su/sudo is performed. 
The daemon scripts are then actually run with those "unnecessary env 
variables", which seems to introduce no problems.

> Sudo in hadoop-functions.sh should preserve environment variables 
> --
>
> Key: HADOOP-17266
> URL: https://issues.apache.org/jira/browse/HADOOP-17266
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.3.0
>Reporter: Chengbing Liu
>Priority: Major
> Attachments: HADOOP-17266.001.patch
>
>
> Steps to reproduce:
> 1. Set {{HDFS_NAMENODE_USER=hdfs}} in {{/etc/default/hadoop-hdfs-namenode}} 
> to enable user check (and switch to {{hdfs}} to start/stop NameNode daemon)
> 2. Stop NameNode with: {{service hadoop-hdfs-namenode stop}}
> 3. Got an error and NameNode is not stopped
> {noformat}
> ERROR: Cannot execute /usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh.
> Failed to stop Hadoop namenode. Return value: 1. [FAILED]
> {noformat}
> The root cause is that after sudo, {{HADOOP_HOME=/usr/lib/hadoop}} is not 
> preserved, and {{/usr/lib/hadoop-hdfs/bin/hdfs}} locates libexec by the 
> following logic:
> {noformat}
> # let's locate libexec...
> if [[ -n "${HADOOP_HOME}" ]]; then
>   HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_HOME}/libexec"
> else
>   bin=$(cd -P -- "$(dirname -- "${MYNAME}")" >/dev/null && pwd -P)
>   HADOOP_DEFAULT_LIBEXEC_DIR="${bin}/../libexec"
> fi
> {noformat}
> I believe the key point here is that we should preserve environment variables 
> when doing sudo.
> Note that this bug is not introduced by HDFS-15353, before which {{su -l}} is 
> used, which will also discard environment variables.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17126) implement non-guava Precondition checkNotNull

2020-09-17 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17197760#comment-17197760
 ] 

Ahmed Hussein commented on HADOOP-17126:


[~ste...@apache.org] Can you please take a look at this small patch?

It adds {{checkNotNull}}, considering the [comments made on PR-2134 | 
https://github.com/apache/hadoop/pull/2134#pullrequestreview-447564488]
{quote}This is going to be a big piece of work. not just the coding, but the 
merging and backporting.
 * we are going to have to backport the noguava package back many versions of 
releases, so that then cherrypicking is easy. That doesn't need any other 
changes to go...it should be a separate patch
 * I'm not sure about "noguava" as a name. "unguava"?
 * package must be scoped private/unstable
 * we need separate checks for state and arg, as different exceptions are 
raised.{quote}

> implement non-guava Precondition checkNotNull
> -
>
> Key: HADOOP-17126
> URL: https://issues.apache.org/jira/browse/HADOOP-17126
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17126.001.patch, HADOOP-17126.002.patch
>
>
> In order to replace Guava Preconditions, we need to implement our own 
> versions of the API.
>  This Jira is to create {{checkNotNull}} in a new package dubbed {{unguava}}.
>  +The plan is as follows+
>  * create a new {{package org.apache.hadoop.util.unguava;}}
>  * {{create class Validate}}
>  * implement {{org.apache.hadoop.util.unguava.Validate}} with the following 
> interface
>  ** {{checkNotNull(final T obj)}}
>  ** {{checkNotNull(final T reference, final Object errorMessage)}}
>  ** {{checkNotNull(final T obj, final String message, final Object... 
> values)}}
>  ** {{checkNotNull(final T obj, final Supplier<String> msgSupplier)}}
>  * Guava Preconditions used {{String.lenientFormat}}, which suppressed 
> exceptions caused by string formatting of the exception message. So, in order 
> to avoid changing the behavior, the implementation catches exceptions 
> triggered by building the message (IllegalFormat, InsufficientArg, 
> NullPointer, etc.)
>  * After merging the new class, we can replace 
> {{guava.Preconditions.checkNotNull}} with {{unguava.Validate.checkNotNull}}
>  * We need the change to go into trunk, 3.1, 3.2, and 3.3
>  
> Similar Jiras will be created to implement checkState, checkArgument, 
> checkIndex
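
A minimal sketch of the proposed {{Validate.checkNotNull}} behavior described 
above; the package and method signatures follow the plan, but exact details may 
differ from the final patch:
{code:java}
package org.apache.hadoop.util.unguava; // package name taken from the plan above

import java.util.function.Supplier;

// Sketch: the failure message is built defensively, so a bad format string
// or failing supplier cannot mask the real NullPointerException.
public final class Validate {
  private Validate() {}

  public static <T> T checkNotNull(final T obj) {
    if (obj == null) {
      throw new NullPointerException();
    }
    return obj;
  }

  public static <T> T checkNotNull(final T obj, final String message,
      final Object... values) {
    if (obj == null) {
      String msg;
      try {
        msg = String.format(message, values);
      } catch (Exception e) { // e.g. IllegalFormatException; keep the NPE primary
        msg = message + " " + java.util.Arrays.toString(values);
      }
      throw new NullPointerException(msg);
    }
    return obj;
  }

  public static <T> T checkNotNull(final T obj, final Supplier<String> msgSupplier) {
    if (obj == null) {
      String msg;
      try {
        msg = msgSupplier.get();
      } catch (Exception e) {
        msg = "(error building message: " + e + ")";
      }
      throw new NullPointerException(msg);
    }
    return obj;
  }
}
{code}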



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17023) Tune listStatus() api of s3a.

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17023?focusedWorklogId=485802&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-485802
 ]

ASF GitHub Bot logged work on HADOOP-17023:
---

Author: ASF GitHub Bot
Created on: 17/Sep/20 15:28
Start Date: 17/Sep/20 15:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2257:
URL: https://github.com/apache/hadoop/pull/2257#issuecomment-694312667


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  28m 50s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 12s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m  5s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  2s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 21s |  hadoop-tools/hadoop-aws: The 
patch generated 4 new + 62 unchanged - 0 fixed = 66 total (was 62)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  2s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   0m 25s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new + 
4 unchanged - 0 fixed = 5 total (was 4)  |
   | +1 :green_heart: |  findbugs  |   1m  6s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 20s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  71m 12s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2257/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2257 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 177d1c552191 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 20a0e6278d6 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2257/4/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2257/4/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt
 |
   |  Test Results | 
https://ci-

[GitHub] [hadoop] hadoop-yetus commented on pull request #2257: HADOOP-17023 Tune S3AFileSystem.listStatus() api.

2020-09-17 Thread GitBox


hadoop-yetus commented on pull request #2257:
URL: https://github.com/apache/hadoop/pull/2257#issuecomment-694312667


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  28m 50s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 12s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m  5s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  2s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 21s |  hadoop-tools/hadoop-aws: The 
patch generated 4 new + 62 unchanged - 0 fixed = 66 total (was 62)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  2s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   0m 25s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new + 
4 unchanged - 0 fixed = 5 total (was 4)  |
   | +1 :green_heart: |  findbugs  |   1m  6s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 20s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  71m 12s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2257/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2257 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 177d1c552191 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 20a0e6278d6 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2257/4/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2257/4/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2257/4/testReport/ |
   | Max. process+thread count | 456 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2257/4/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   

[jira] [Work logged] (HADOOP-17208) LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all KMSClientProvider instances

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17208?focusedWorklogId=485878&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-485878
 ]

ASF GitHub Bot logged work on HADOOP-17208:
---

Author: ASF GitHub Bot
Created on: 17/Sep/20 17:39
Start Date: 17/Sep/20 17:39
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao merged pull request #2259:
URL: https://github.com/apache/hadoop/pull/2259


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 485878)
Time Spent: 2h 20m  (was: 2h 10m)

> LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all 
> KMSClientProvider instances
> -
>
> Key: HADOOP-17208
> URL: https://issues.apache.org/jira/browse/HADOOP-17208
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.4
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Without invalidateCache, the deleted key may still exist in a server's key 
> cache (CachingKeyProvider in KMSWebApp.java) if that server was not hit by the 
> delete request. Clients may still be able to access encrypted files by 
> connecting to KMS instances holding a cached version of the deleted key, until 
> the cache entry (10 min by default) expires. 
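
A minimal sketch of the idea behind the fix; the {{providers}} field and 
{{nextIdx()}} helper are assumed names, and error handling is simplified 
relative to the merged change:
{code:java}
// After the key is deleted via one KMS instance, invalidate its cache entry
// on every instance, so no server keeps serving the deleted key from its
// CachingKeyProvider until the TTL expires.
public void deleteKey(String name) throws IOException {
  providers[nextIdx()].deleteKey(name);   // delete via one instance
  for (KMSClientProvider provider : providers) {
    try {
      provider.invalidateCache(name);     // best-effort on all instances
    } catch (IOException e) {
      // a missed invalidation only leaves that server's cached copy to expire
    }
  }
}
{code}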



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao merged pull request #2259: HADOOP-17208. LoadBalanceKMSClientProvider#deleteKey should invalidat…

2020-09-17 Thread GitBox


xiaoyuyao merged pull request #2259:
URL: https://github.com/apache/hadoop/pull/2259


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17208) LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all KMSClientProvider instances

2020-09-17 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-17208:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks all for the reviews and discussions. I've merged the change. 

> LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all 
> KMSClientProvider instances
> -
>
> Key: HADOOP-17208
> URL: https://issues.apache.org/jira/browse/HADOOP-17208
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.4
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Without invalidateCache, the deleted key may still exist in a server's key 
> cache (CachingKeyProvider in KMSWebApp.java) if that server was not hit by the 
> delete request. Clients may still be able to access encrypted files by 
> connecting to KMS instances holding a cached version of the deleted key, until 
> the cache entry (10 min by default) expires. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17265) ABFS: Support for Client Correlation ID

2020-09-17 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17197871#comment-17197871
 ] 

Steve Loughran commented on HADOOP-17265:
-

any public docs on this? That is: what does a valid ID look like, and how do 
you read it in the logs?

+[~mehakmeetSingh]

> ABFS: Support for Client Correlation ID
> ---
>
> Key: HADOOP-17265
> URL: https://issues.apache.org/jira/browse/HADOOP-17265
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sumangala Patki
>Priority: Major
>  Labels: abfsactive
>
> Introducing a client correlation ID that appears in the Azure diagnostic logs



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17269) [JDK 11] False-positive findbugs warnings

2020-09-17 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17269:
--

 Summary: [JDK 11] False-positive findbugs warnings
 Key: HADOOP-17269
 URL: https://issues.apache.org/jira/browse/HADOOP-17269
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Akira Ajisaka


In Java 11, there are a lot of false-positive findbugs warnings in 
try-with-resources.
Ref: 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/1/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html

This issue has been fixed by https://github.com/spotbugs/spotbugs/pull/1248, but 
there is currently no release that includes the fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17269) [JDK 11] False-positive findbugs warnings

2020-09-17 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17197912#comment-17197912
 ] 

Akira Ajisaka commented on HADOOP-17269:


I'd like to upgrade the SpotBugs version once the new version (4.1.3?) is released.

> [JDK 11] False-positive findbugs warnings
> -
>
> Key: HADOOP-17269
> URL: https://issues.apache.org/jira/browse/HADOOP-17269
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> In Java 11, there are a lot of false-positive findbugs warnings in 
> try-with-resources.
> Ref: 
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/1/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
> This issue has been fixed by https://github.com/spotbugs/spotbugs/pull/1248, 
> but there is currently no release that includes the fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] viirya commented on a change in pull request #2297: HADOOP-17125. Using snappy-java in SnappyCodec

2020-09-17 Thread GitBox


viirya commented on a change in pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#discussion_r490596761



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyCompressor.java
##
@@ -48,24 +48,6 @@
   private long bytesRead = 0L;
   private long bytesWritten = 0L;
 
-  private static boolean nativeSnappyLoaded = false;
-  
-  static {
-if (NativeCodeLoader.isNativeCodeLoaded() &&
-NativeCodeLoader.buildSupportsSnappy()) {
-  try {
-initIDs();
-nativeSnappyLoaded = true;
-  } catch (Throwable t) {
-LOG.error("failed to load SnappyCompressor", t);
-  }
-}
-  }
-  
-  public static boolean isNativeCodeLoaded() {
-return nativeSnappyLoaded;

Review comment:
   added.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] viirya commented on a change in pull request #2297: HADOOP-17125. Using snappy-java in SnappyCodec

2020-09-17 Thread GitBox


viirya commented on a change in pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#discussion_r490596868



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.java
##
@@ -45,30 +46,19 @@
   private int userBufOff = 0, userBufLen = 0;
   private boolean finished;
 
-  private static boolean nativeSnappyLoaded = false;
-
-  static {
-if (NativeCodeLoader.isNativeCodeLoaded() &&
-NativeCodeLoader.buildSupportsSnappy()) {
-  try {
-initIDs();
-nativeSnappyLoaded = true;
-  } catch (Throwable t) {
-LOG.error("failed to load SnappyDecompressor", t);
-  }
-}
-  }
-  
-  public static boolean isNativeCodeLoaded() {
-return nativeSnappyLoaded;
-  }
-  
   /**
* Creates a new compressor.
*
* @param directBufferSize size of the direct buffer to be used.
*/
   public SnappyDecompressor(int directBufferSize) {
+// `snappy-java` is provided scope. We need to check its availability.
+try {
+  SnappyLoader.getVersion();
+} catch (Throwable t) {
+  LOG.warn("Error loading snappy libraries: " + t);

Review comment:
   ok, changed to throw `RuntimeException`.
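
For reference, a sketch of the agreed change, under the same assumption the 
diff makes (that {{SnappyLoader.getVersion()}} throws when snappy-java is 
absent); the exact message is illustrative:
{code:java}
// Fail fast in the constructor instead of only logging a warning:
// `snappy-java` is provided scope, so it may be missing at runtime.
try {
  SnappyLoader.getVersion();
} catch (Throwable t) {
  throw new RuntimeException(
      "snappy-java is not available on the classpath", t);
}
{code}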





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=485980&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-485980
 ]

ASF GitHub Bot logged work on HADOOP-17125:
---

Author: ASF GitHub Bot
Created on: 17/Sep/20 22:28
Start Date: 17/Sep/20 22:28
Worklog Time Spent: 10m 
  Work Description: viirya commented on a change in pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#discussion_r490596761



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyCompressor.java
##
@@ -48,24 +48,6 @@
   private long bytesRead = 0L;
   private long bytesWritten = 0L;
 
-  private static boolean nativeSnappyLoaded = false;
-  
-  static {
-if (NativeCodeLoader.isNativeCodeLoaded() &&
-NativeCodeLoader.buildSupportsSnappy()) {
-  try {
-initIDs();
-nativeSnappyLoaded = true;
-  } catch (Throwable t) {
-LOG.error("failed to load SnappyCompressor", t);
-  }
-}
-  }
-  
-  public static boolean isNativeCodeLoaded() {
-return nativeSnappyLoaded;

Review comment:
   added.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 485980)
Time Spent: 12h 20m  (was: 12h 10m)

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 12h 20m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several 
> disadvantages:
>  * It requires native *libhadoop* and *libsnappy* to be installed in the system 
> *LD_LIBRARY_PATH*, and they have to be installed separately on each node of 
> the clusters, container images, or local test environments, which adds huge 
> complexity from a deployment point of view. In some environments, it requires 
> compiling the natives from source, which is non-trivial. Also, this approach 
> is platform dependent; the binary may not work on a different platform, so it 
> requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance cost 
> for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based 
> implementation. It bundles native binaries for Linux, Mac, and IBM in the jar 
> file, and it can automatically load the native binaries into the JVM from the 
> jar without any setup. If a native implementation cannot be found for a 
> platform, it can fall back to a pure-Java implementation of snappy based on 
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].
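
For context, a minimal, self-contained round trip through snappy-java's basic 
API; the bundled native library is loaded on first use, with no 
libhadoop/libsnappy or java.library.path setup:
{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.xerial.snappy.Snappy;

// Compress and decompress a byte array with snappy-java.
public class SnappyJavaExample {
  public static void main(String[] args) throws IOException {
    byte[] input = "hello snappy-java".getBytes(StandardCharsets.UTF_8);
    byte[] compressed = Snappy.compress(input);
    byte[] restored = Snappy.uncompress(compressed);
    System.out.println(new String(restored, StandardCharsets.UTF_8));
  }
}
{code}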



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=485981&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-485981
 ]

ASF GitHub Bot logged work on HADOOP-17125:
---

Author: ASF GitHub Bot
Created on: 17/Sep/20 22:29
Start Date: 17/Sep/20 22:29
Worklog Time Spent: 10m 
  Work Description: viirya commented on a change in pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#discussion_r490596868



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.java
##
@@ -45,30 +46,19 @@
   private int userBufOff = 0, userBufLen = 0;
   private boolean finished;
 
-  private static boolean nativeSnappyLoaded = false;
-
-  static {
-if (NativeCodeLoader.isNativeCodeLoaded() &&
-NativeCodeLoader.buildSupportsSnappy()) {
-  try {
-initIDs();
-nativeSnappyLoaded = true;
-  } catch (Throwable t) {
-LOG.error("failed to load SnappyDecompressor", t);
-  }
-}
-  }
-  
-  public static boolean isNativeCodeLoaded() {
-return nativeSnappyLoaded;
-  }
-  
   /**
* Creates a new compressor.
*
* @param directBufferSize size of the direct buffer to be used.
*/
   public SnappyDecompressor(int directBufferSize) {
+// `snappy-java` is provided scope. We need to check its availability.
+try {
+  SnappyLoader.getVersion();
+} catch (Throwable t) {
+  LOG.warn("Error loading snappy libraries: " + t);

Review comment:
   ok, changed to throw `RuntimeException`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 485981)
Time Spent: 12.5h  (was: 12h 20m)

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 12.5h
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several 
> disadvantages:
>  * It requires native *libhadoop* and *libsnappy* to be installed in the system 
> *LD_LIBRARY_PATH*, and they have to be installed separately on each node of 
> the clusters, container images, or local test environments, which adds huge 
> complexity from a deployment point of view. In some environments, it requires 
> compiling the natives from source, which is non-trivial. Also, this approach 
> is platform dependent; the binary may not work on a different platform, so it 
> requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance cost 
> for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based 
> implementation. It bundles native binaries for Linux, Mac, and IBM in the jar 
> file, and it can automatically load the native binaries into the JVM from the 
> jar without any setup. If a native implementation cannot be found for a 
> platform, it can fall back to a pure-Java implementation of snappy based on 
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2297: HADOOP-17125. Using snappy-java in SnappyCodec

2020-09-17 Thread GitBox


hadoop-yetus commented on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-694592777


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 18s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 58s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 24s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  16m 49s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 44s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 41s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 10s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 41s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 35s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 37s |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
   | +0 :ok: |  findbugs  |   0m 35s |  branch/hadoop-project-dist no findbugs 
output file (findbugsXml.xml)  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  1s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 33s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  19m 33s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 38 new + 125 unchanged - 
38 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  19m 33s |  the patch passed  |
   | +1 :green_heart: |  javac  |  19m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 48s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  16m 48s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 43 new + 120 unchanged - 
43 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  16m 48s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 48s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 48s |  root: The patch generated 6 new 
+ 151 unchanged - 5 fixed = 157 total (was 156)  |
   | +1 :green_heart: |  mvnsite  |   2m 40s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  4s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m  6s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  findbugs  |   0m 36s |  hadoop-project has no data from 
findbugs  |
   | +0 :ok: |  findbugs  |   0m 35s |  hadoop-project-dist has no data from 
findbugs  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 34s |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 35s |  hadoop-project-dist in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   9m 24s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 53s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 177m 15s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2297/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2297 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml cc findbugs checkstyle golang |
   | uname | Linux a5f6c2c9d4c1 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/

[jira] [Work logged] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=486023&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-486023
 ]

ASF GitHub Bot logged work on HADOOP-17125:
---

Author: ASF GitHub Bot
Created on: 18/Sep/20 01:21
Start Date: 18/Sep/20 01:21
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-694592777


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 18s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 58s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 24s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  16m 49s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 44s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 41s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 10s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 41s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 35s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 37s |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
   | +0 :ok: |  findbugs  |   0m 35s |  branch/hadoop-project-dist no findbugs 
output file (findbugsXml.xml)  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  1s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 33s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  19m 33s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 38 new + 125 unchanged - 
38 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  19m 33s |  the patch passed  |
   | +1 :green_heart: |  javac  |  19m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 48s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  16m 48s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 43 new + 120 unchanged - 
43 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  16m 48s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 48s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 48s |  root: The patch generated 6 new 
+ 151 unchanged - 5 fixed = 157 total (was 156)  |
   | +1 :green_heart: |  mvnsite  |   2m 40s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  4s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m  6s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  findbugs  |   0m 36s |  hadoop-project has no data from 
findbugs  |
   | +0 :ok: |  findbugs  |   0m 35s |  hadoop-project-dist has no data from 
findbugs  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 34s |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 35s |  hadoop-project-dist in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   9m 24s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 53s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 177m 15s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibra

[GitHub] [hadoop] LeonGao91 commented on a change in pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-17 Thread GitBox


LeonGao91 commented on a change in pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#discussion_r490652856



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
##
@@ -412,16 +435,31 @@ long getBlockPoolUsed(String bpid) throws IOException {
*/
   @VisibleForTesting
   public long getCapacity() {
+long capacity;
 if (configuredCapacity < 0L) {
   long remaining;
   if (cachedCapacity > 0L) {
 remaining = cachedCapacity - getReserved();
   } else {
 remaining = usage.getCapacity() - getReserved();
   }
-  return Math.max(remaining, 0L);
+  capacity = Math.max(remaining, 0L);
+} else {
+  capacity = configuredCapacity;
+}
+
+if (enableSameDiskArchival) {
+  double reservedForArchival = conf.getDouble(

Review comment:
   Oh, this is a mistake, thanks for the catch!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] LeonGao91 commented on a change in pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-17 Thread GitBox


LeonGao91 commented on a change in pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#discussion_r490657266



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
##
@@ -412,16 +435,31 @@ long getBlockPoolUsed(String bpid) throws IOException {
*/
   @VisibleForTesting
   public long getCapacity() {
+long capacity;
 if (configuredCapacity < 0L) {
   long remaining;
   if (cachedCapacity > 0L) {
 remaining = cachedCapacity - getReserved();
   } else {
 remaining = usage.getCapacity() - getReserved();
   }
-  return Math.max(remaining, 0L);
+  capacity = Math.max(remaining, 0L);
+} else {
+  capacity = configuredCapacity;
+}
+
+if (enableSameDiskArchival) {
+  double reservedForArchival = conf.getDouble(

Review comment:
   Oh yeah my mistake here, thanks for the catch!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] LeonGao91 commented on a change in pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-17 Thread GitBox


LeonGao91 commented on a change in pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#discussion_r490657364



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
##
@@ -134,6 +134,9 @@
   private final FileIoProvider fileIoProvider;
   private final DataNodeVolumeMetrics metrics;
   private URI baseURI;
+  private boolean enableSameDiskArchival;
+  private final String device;

Review comment:
   The "device" here is the string value of the filesystem mount point. I 
wanted to use it to keep track of which two volumes are on the same mount (thus 
the same disk). Datanode can use the existing DF#getMount() to detect it 
automatically.
   I can probably change the name to "mount" to make it more clear.
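
A minimal sketch of that detection using Hadoop's existing 
{{org.apache.hadoop.fs.DF}} helper; the surrounding wiring is illustrative:
{code:java}
import java.io.File;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.DF;

// Two volume directories are on the same mount (and therefore share one
// device) if DF reports the same filesystem mount point for both.
public class SameMountCheck {
  public static boolean onSameMount(File volA, File volB, Configuration conf)
      throws IOException {
    String mountA = new DF(volA, conf).getMount();
    String mountB = new DF(volB, conf).getMount();
    return mountA.equals(mountB);
  }
}
{code}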





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] LeonGao91 commented on a change in pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-17 Thread GitBox


LeonGao91 commented on a change in pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#discussion_r490657507



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
##
@@ -190,6 +193,26 @@
 }
 this.conf = conf;
 this.fileIoProvider = fileIoProvider;
+this.enableSameDiskArchival =
+conf.getBoolean(DFSConfigKeys.DFS_DATANODE_ALLOW_SAME_DISK_TIERING,
+DFSConfigKeys.DFS_DATANODE_ALLOW_SAME_DISK_TIERING_DEFAULT);
+if (enableSameDiskArchival) {
+  this.device = usage.getMount();
+  reservedForArchive = conf.getDouble(
+  DFSConfigKeys.DFS_DATANODE_RESERVE_FOR_ARCHIVE_PERCENTAGE,
+  DFSConfigKeys.DFS_DATANODE_RESERVE_FOR_ARCHIVE_PERCENTAGE_DEFAULT);
+  if (reservedForArchive >= 1) {
+FsDatasetImpl.LOG.warn("Value of reserve-for-archival is >= 100% for "
++ currentDir + ". Setting it to 99%.");
+reservedForArchive = 0.99;

Review comment:
   Yeah I think you are right, I will update and make this at most 1.
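
A one-line sketch of the agreed change (illustrative): cap the configured 
percentage at 100% instead of silently forcing it to 99%:
{code:java}
// Clamp a misconfigured ratio to at most 1.0 (i.e. 100%).
reservedForArchive = Math.min(reservedForArchive, 1.0);
{code}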





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] LeonGao91 commented on pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-17 Thread GitBox


LeonGao91 commented on pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#issuecomment-694612231


   > Thanks @LeonGao91 for your work, some comments inline.
   > I wonder, if someone configures more than one archive path on one device 
(for some reason, or by misconfiguration), then it may not work correctly, 
right? In my opinion that works fine for a logical disk, although it is not 
recommended. Thanks.
   
   Thanks for the review! @Hexiaoqiao 
   
   I think this feature is mostly useful if users don't want to set up 
Linux-level partitions to divide DISK/ARCHIVE, since the size of partitions is 
difficult to change in production. 
   
   For the questions:
   1) It checks the underlying filesystem mount to identify whether two volumes 
are on the same mount, instead of the real physical disk. So it should work if 
the mount is on a logical partition.
   The reason is that the datanode uses DF to calculate capacity-related 
information, which is at the filesystem mount level. This patch makes sure the 
capacity of DISK/ARCHIVE volumes is correctly calculated and reported.
   
   2) If users mistakenly configure multiple archive paths on the same mount, 
it will throw an error msg (as per [this 
line](https://github.com/apache/hadoop/pull/2288/files#diff-8aa3c5049e8a5394bea1aa107dd87d30R339)).
 But yes, the capacity will not be reported correctly in this case. Please let 
me know what you think; we can probably just exit the DN and let users fix the 
config.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao commented on pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-17 Thread GitBox


Hexiaoqiao commented on pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#issuecomment-694635082


   > 2. If users mistakenly configure multiple archive paths on the same 
mount, it will throw an error msg (as per [this 
line](https://github.com/apache/hadoop/pull/2288/files#diff-8aa3c5049e8a5394bea1aa107dd87d30R339)).
 But yes, the capacity will not be reported correctly in this case. Please let 
me know what you think; we can probably just exit the DN and let users fix the 
config.
   
   Thanks @LeonGao91, this is indeed my concern. I think logging alone is not 
the proper way, because the subsequent logic will not be correct, especially 
the remaining capacity if misconfigured. FYI, thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao edited a comment on pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-17 Thread GitBox


Hexiaoqiao edited a comment on pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#issuecomment-694635082


   > 2. If users mistakenly configure multiple archive paths on the same 
mount, it will throw an error msg (as per [this 
line](https://github.com/apache/hadoop/pull/2288/files#diff-8aa3c5049e8a5394bea1aa107dd87d30R339)).
 But yes, the capacity will not be reported correctly in this case. Please let 
me know what you think; we can probably just exit the DN and let users fix the 
config.
   
   Thanks @LeonGao91, this is indeed my concern. I think logging alone is not 
the proper way, because the subsequent logic will not be correct, especially 
the remaining capacity if misconfigured. IMO, exiting the DataNode instance is 
probably more graceful. FYI, thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17270) Fix testCompressorDecompressorWithExeedBufferLimit to cover the intended scenario

2020-09-17 Thread Masatake Iwasaki (Jira)
Masatake Iwasaki created HADOOP-17270:
-

 Summary: Fix testCompressorDecompressorWithExeedBufferLimit to 
cover the intended scenario
 Key: HADOOP-17270
 URL: https://issues.apache.org/jira/browse/HADOOP-17270
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki


The input data must be larger than the internal buffer of the 
Compressor/Decompressor, as the test name implies. The test must use a 
strategy that covers compression/decompression of multiple blocks.
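
A minimal sketch of the intended scenario (assumptions: the 
"io.compression.codec.lz4.buffersize" key and the sizes below are chosen 
for illustration; this is not the actual patch):

{noformat}
import java.io.*;
import java.util.Random;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.Lz4Codec;
import org.junit.Assert;
import org.junit.Test;

public class Lz4MultiBlockRoundTripSketch {
  @Test
  public void roundTripExceedsInternalBuffer() throws IOException {
    Configuration conf = new Configuration();
    // Shrink the internal buffer so the input below spans multiple blocks.
    conf.setInt("io.compression.codec.lz4.buffersize", 4 * 1024);
    Lz4Codec codec = new Lz4Codec();
    codec.setConf(conf);

    byte[] input = new byte[64 * 1024];  // 16x the internal buffer
    new Random(42).nextBytes(input);

    // Compress through the codec's stream, then decompress and compare.
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    try (OutputStream out = codec.createOutputStream(sink)) {
      out.write(input);
    }
    byte[] restored = new byte[input.length];
    try (InputStream in = codec.createInputStream(
        new ByteArrayInputStream(sink.toByteArray()))) {
      IOUtils.readFully(in, restored, 0, restored.length);
    }
    // Fails if multi-block compression/decompression is mishandled.
    Assert.assertArrayEquals(input, restored);
  }
}
{noformat}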



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] LeonGao91 commented on pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-17 Thread GitBox


LeonGao91 commented on pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#issuecomment-694639539


   > Thanks @LeonGao91 , this is indeed my concern. I think a log alone is 
not the proper way, because the subsequent logic will be incorrect; in 
particular, the reported capacity stays wrong under the misconfiguration. 
IMO, exiting the DataNode instance is probably more graceful. FYI, thanks.
   
   That makes sense, I will make the change accordingly.
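   
   For what it's worth, a minimal sketch of the fail-fast idea under 
discussion (the method, parameters, and mount-resolution map here are 
hypothetical stand-ins, not the actual code in this PR):
   
   ```java
   import java.io.IOException;
   import java.util.HashMap;
   import java.util.List;
   import java.util.Map;
   
   // Hypothetical fail-fast check: abort DataNode startup when two ARCHIVE
   // directories resolve to the same device mount, rather than only logging.
   static void checkNoDuplicateArchiveMounts(List<String> archiveDirs,
       Map<String, String> dirToMount) throws IOException {
     Map<String, String> archiveByMount = new HashMap<>();
     for (String dir : archiveDirs) {
       String mount = dirToMount.get(dir);  // assumed mount resolution
       String prev = archiveByMount.put(mount, dir);
       if (prev != null) {
         // Failing fast keeps capacity accounting from silently going wrong.
         throw new IOException("ARCHIVE dirs " + prev + " and " + dir
             + " share mount " + mount
             + "; fix dfs.datanode.data.dir and restart the DataNode.");
       }
     }
   }
   ```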



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims opened a new pull request #2311: HADOOP-17270. Fix testCompressorDecompressorWithExeedBufferLimit to c…

2020-09-17 Thread GitBox


iwasakims opened a new pull request #2311:
URL: https://github.com/apache/hadoop/pull/2311


   [Link to HADOOP-17270](https://issues.apache.org/jira/browse/HADOOP-17270)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17270) Fix testCompressorDecompressorWithExeedBufferLimit to cover the intended scenario

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17270:

Labels: pull-request-available  (was: )

> Fix testCompressorDecompressorWithExeedBufferLimit to cover the intended 
> scenario
> -
>
> Key: HADOOP-17270
> URL: https://issues.apache.org/jira/browse/HADOOP-17270
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The input data must be larger than the internal buffer of the 
> Compressor/Decompressor, as the test name implies. The test must use a 
> strategy that covers compression/decompression of multiple blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17270) Fix testCompressorDecompressorWithExeedBufferLimit to cover the intended scenario

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17270?focusedWorklogId=486058&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-486058
 ]

ASF GitHub Bot logged work on HADOOP-17270:
---

Author: ASF GitHub Bot
Created on: 18/Sep/20 04:25
Start Date: 18/Sep/20 04:25
Worklog Time Spent: 10m 
  Work Description: iwasakims opened a new pull request #2311:
URL: https://github.com/apache/hadoop/pull/2311


   [Link to HADOOP-17270](https://issues.apache.org/jira/browse/HADOOP-17270)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 486058)
Remaining Estimate: 0h
Time Spent: 10m

> Fix testCompressorDecompressorWithExeedBufferLimit to cover the intended 
> scenario
> -
>
> Key: HADOOP-17270
> URL: https://issues.apache.org/jira/browse/HADOOP-17270
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The input data must be larger than the internal buffer of the 
> Compressor/Decompressor, as the test name implies. The test must use a 
> strategy that covers compression/decompression of multiple blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2

2020-09-17 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17198109#comment-17198109
 ] 

Masatake Iwasaki commented on HADOOP-17144:
---

{quote}
we do have a test case similar to this scenario in 
TestCompressorDecompressor#testCompressorDecompressorWithExeedBufferLimit, 
modified the lz4 constructors to use default buffer size
{quote}

[~hemanthboyina] Interestingly, the test case does not cover the intended 
scenario; I filed HADOOP-17270 for that. The 004 patch fails on the updated 
test case, and reverting the change below fixed the failure.

{noformat}
@@ -120,7 +121,7 @@ public synchronized void setInput(byte[] b, int off, int len) {
    * consumed.
    */
   synchronized void setInputFromSavedData() {
-    compressedDirectBufLen = Math.min(userBufLen, directBufferSize);
+    compressedDirectBufLen = userBufLen;
 
     // Reinitialize lz4's input direct buffer
     compressedDirectBuf.rewind();
{noformat}
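
For context, the reverted line is what lets inputs larger than the direct 
buffer be consumed over several passes; roughly (the comment is mine, not 
from the patch):

{noformat}
// Stage at most one direct-buffer's worth of the saved user input per pass;
// the remainder stays in userBuf and is picked up by subsequent
// setInputFromSavedData() calls, which is what makes inputs larger than
// directBufferSize (i.e. multiple blocks) work.
compressedDirectBufLen = Math.min(userBufLen, directBufferSize);
{noformat}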


> Update Hadoop's lz4 to v1.9.2
> -
>
> Key: HADOOP-17144
> URL: https://issues.apache.org/jira/browse/HADOOP-17144
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Major
> Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, 
> HADOOP-17144.003.patch, HADOOP-17144.004.patch
>
>
> Update hadoop's native lz4 to v1.9.2 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] umamaheswararao opened a new pull request #2312: HDFS-15585: ViewDFS#getDelegationToken should not throw UnsupportedOperationException.

2020-09-17 Thread GitBox


umamaheswararao opened a new pull request #2312:
URL: https://github.com/apache/hadoop/pull/2312


   https://issues.apache.org/jira/browse/HDFS-15585



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] viirya commented on pull request #2297: HADOOP-17125. Using snappy-java in SnappyCodec

2020-09-17 Thread GitBox


viirya commented on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-694659679


   Regarding the change in mvn dependencies:
   
   For Apache Hadoop Common, the diff is:
   
   Before:
   ```
   [INFO] +- org.apache.avro:avro:jar:1.7.7:compile
   [INFO] |  +- com.thoughtworks.paranamer:paranamer:jar:2.3:compile
   [INFO] |  \- org.xerial.snappy:snappy-java:jar:1.0.5:compile
   ```
   
   After:
   ```
   [INFO] +- org.apache.avro:avro:jar:1.7.7:compile
   [INFO] |  \- com.thoughtworks.paranamer:paranamer:jar:2.3:compile
   ...
   [INFO] \- org.xerial.snappy:snappy-java:jar:1.1.7.7:provided
   ```
   
   For other modules, the change is the same.
   
   Like Apache Hadoop NFS:
   
   Before:
   ```
   [INFO] |  +- org.apache.avro:avro:jar:1.7.7:provided
   [INFO] |  |  +- com.thoughtworks.paranamer:paranamer:jar:2.3:provided
   [INFO] |  |  \- org.xerial.snappy:snappy-java:jar:1.0.5:provided
   ```
   
   After:
   ```
   [INFO] |  +- org.apache.avro:avro:jar:1.7.7:provided
   [INFO] |  |  +- com.thoughtworks.paranamer:paranamer:jar:2.3:provided
   [INFO] |  |  \- org.xerial.snappy:snappy-java:jar:1.1.7.7:provided
   ```
   
   Or like Apache Hadoop KMS:
   
   Before:
   ```
   [INFO] |  +- org.apache.avro:avro:jar:1.7.7:compile
   [INFO] |  |  +- com.thoughtworks.paranamer:paranamer:jar:2.3:compile
   [INFO] |  |  \- org.xerial.snappy:snappy-java:jar:1.0.5:compile
   ```
   
   After:
   ```
   [INFO] |  +- org.apache.avro:avro:jar:1.7.7:compile
   [INFO] |  |  +- com.thoughtworks.paranamer:paranamer:jar:2.3:compile
   [INFO] |  |  \- org.xerial.snappy:snappy-java:jar:1.1.7.7:compile
   ```
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=486064&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-486064
 ]

ASF GitHub Bot logged work on HADOOP-17125:
---

Author: ASF GitHub Bot
Created on: 18/Sep/20 05:27
Start Date: 18/Sep/20 05:27
Worklog Time Spent: 10m 
  Work Description: viirya commented on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-694659679


   Regarding the change in mvn dependencies:
   
   For Apache Hadoop Common, the diff is:
   
   Before:
   ```
   [INFO] +- org.apache.avro:avro:jar:1.7.7:compile
   [INFO] |  +- com.thoughtworks.paranamer:paranamer:jar:2.3:compile
   [INFO] |  \- org.xerial.snappy:snappy-java:jar:1.0.5:compile
   ```
   
   After:
   ```
   [INFO] +- org.apache.avro:avro:jar:1.7.7:compile
   [INFO] |  \- com.thoughtworks.paranamer:paranamer:jar:2.3:compile
   ...
   [INFO] \- org.xerial.snappy:snappy-java:jar:1.1.7.7:provided
   ```
   
   For other modules, the change is the same.
   
   Like Apache Hadoop NFS:
   
   Before:
   ```
   [INFO] |  +- org.apache.avro:avro:jar:1.7.7:provided
   [INFO] |  |  +- com.thoughtworks.paranamer:paranamer:jar:2.3:provided
   [INFO] |  |  \- org.xerial.snappy:snappy-java:jar:1.0.5:provided
   ```
   
   After:
   ```
   [INFO] |  +- org.apache.avro:avro:jar:1.7.7:provided
   [INFO] |  |  +- com.thoughtworks.paranamer:paranamer:jar:2.3:provided
   [INFO] |  |  \- org.xerial.snappy:snappy-java:jar:1.1.7.7:provided
   ```
   
   Or like Apache Hadoop KMS:
   
   Before:
   ```
   [INFO] |  +- org.apache.avro:avro:jar:1.7.7:compile
   [INFO] |  |  +- com.thoughtworks.paranamer:paranamer:jar:2.3:compile
   [INFO] |  |  \- org.xerial.snappy:snappy-java:jar:1.0.5:compile
   ```
   
   After:
   ```
   [INFO] |  +- org.apache.avro:avro:jar:1.7.7:compile
   [INFO] |  |  +- com.thoughtworks.paranamer:paranamer:jar:2.3:compile
   [INFO] |  |  \- org.xerial.snappy:snappy-java:jar:1.1.7.7:compile
   ```
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 486064)
Time Spent: 12h 50m  (was: 12h 40m)

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 12h 50m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several 
> disadvantages:
>  * It requires native *libhadoop* and *libsnappy* to be installed on the 
> system *LD_LIBRARY_PATH*, and they have to be installed separately on each 
> node of the cluster, container image, or local test environment, which adds 
> huge complexity from a deployment point of view. In some environments, it 
> requires compiling the natives from source, which is non-trivial. Also, this 
> approach is platform dependent; the binary may not work on a different 
> platform, so it requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance 
> cost for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based 
> implementation. It bundles native binaries for Linux, Mac, and IBM platforms 
> in the jar file and can automatically load them into the JVM from the jar 
> without any setup. If a native implementation cannot be found for a 
> platform, it can fall back to a pure-Java implementation of snappy based on 
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].
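
As a minimal illustration of the zero-setup usage described above (a sketch, 
not part of any patch here):

{noformat}
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.xerial.snappy.Snappy;

public class SnappyJavaSketch {
  public static void main(String[] args) throws IOException {
    byte[] input = "Hello Hadoop".getBytes(StandardCharsets.UTF_8);
    // snappy-java loads its bundled native library from the jar
    // automatically; no LD_LIBRARY_PATH or java.library.path setup needed.
    byte[] compressed = Snappy.compress(input);
    byte[] restored = Snappy.uncompress(compressed);
    System.out.println(new String(restored, StandardCharsets.UTF_8));
  }
}
{noformat}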



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17270) Fix testCompressorDecompressorWithExeedBufferLimit to cover the intended scenario

2020-09-17 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-17270:
--
Status: Patch Available  (was: Open)

> Fix testCompressorDecompressorWithExeedBufferLimit to cover the intended 
> scenario
> -
>
> Key: HADOOP-17270
> URL: https://issues.apache.org/jira/browse/HADOOP-17270
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The input data must be larger than the internal buffer of the 
> Compressor/Decompressor, as the test name implies. The test must use a 
> strategy that covers compression/decompression of multiple blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-17 Thread GitBox


hadoop-yetus commented on pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#issuecomment-694690394


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  3s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 16s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m  9s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 13s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  6s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  4s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |   1m 10s |  
hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 2 new + 602 
unchanged - 0 fixed = 604 total (was 602)  |
   | +1 :green_heart: |  compile  |   1m  4s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  javac  |   1m  4s |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 2 new 
+ 586 unchanged - 0 fixed = 588 total (was 586)  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 41s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 11s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 110m 40s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 199m 27s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2288/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2288 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux e52145dca4b9 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / eacbe07b565 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | javac | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2288/4/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.0