[GitHub] [hadoop] umamaheswararao edited a comment on pull request #2260: HADOOP-17028. ViewFS should initialize mounted target filesystems lazily

2020-09-04 Thread GitBox


umamaheswararao edited a comment on pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#issuecomment-687277527


   Thanks @steveloughran for the review!
   In addition to his comments, I have the following question/comment.
   In practice, I suspect this lazy initialization will not give us the full 
benefit in the following MR/YARN case.
   In DelegationTokenIssuer#collectDelegationTokens, tokens are collected from 
all children:
   
   ```java
   // Now collect the tokens from the children.
   final DelegationTokenIssuer[] ancillary =
       issuer.getAdditionalTokenIssuers();
   if (ancillary != null) {
     for (DelegationTokenIssuer subIssuer : ancillary) {
       collectDelegationTokens(subIssuer, renewer, credentials, tokens);
     }
   }
   ```
   
   I am wondering whether this call will force all target filesystems to be 
initialized. If so, we are not getting the full benefit of lazy initialization.
   Did you consider this case?
   Is there a way to avoid this? The MR/YARN team may be able to suggest an 
alternative way to load tokens.
   CC: @wangdatan @sunilgovind @rohithsharmaks
   CC: @xiaoyuyao
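   For illustration, a minimal client-side sketch of the call path in question 
(the class name, viewfs URI, and renewer below are hypothetical, not part of 
this PR), showing where eager initialization of every mounted target would be 
triggered:

   ```java
   import java.io.IOException;
   import java.net.URI;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.security.Credentials;
   import org.apache.hadoop.security.token.Token;

   public class ViewFsTokenCollection {
     public static void main(String[] args) throws IOException {
       Configuration conf = new Configuration();
       // Assumes a viewfs mount table named clusterX with several targets.
       FileSystem viewFs = FileSystem.get(URI.create("viewfs://clusterX/"), conf);

       Credentials credentials = new Credentials();
       // addDelegationTokens() funnels into
       // DelegationTokenIssuer#collectDelegationTokens, which recurses into
       // getAdditionalTokenIssuers(); for ViewFS those issuers are the child
       // filesystems, so every mounted target gets instantiated to answer this
       // call, even targets the job never touches.
       Token<?>[] tokens = viewFs.addDelegationTokens("yarn", credentials);
       System.out.println("Collected " + tokens.length + " tokens");
     }
   }
   ```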






[jira] [Work logged] (HADOOP-17028) ViewFS should initialize target filesystems lazily

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17028?focusedWorklogId=479301&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479301
 ]

ASF GitHub Bot logged work on HADOOP-17028:
---

Author: ASF GitHub Bot
Created on: 05/Sep/20 06:18
Start Date: 05/Sep/20 06:18
Worklog Time Spent: 10m 
  Work Description: umamaheswararao edited a comment on pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#issuecomment-687277527


   Thanks @steveloughran for the review!
   In addition to his comments, I have the following question/comment.
   In practice, I suspect this lazy initialization will not give us the full 
benefit in the following MR/YARN case.
   In DelegationTokenIssuer#collectDelegationTokens, tokens are collected from 
all children:
   
   ```java
   // Now collect the tokens from the children.
   final DelegationTokenIssuer[] ancillary =
       issuer.getAdditionalTokenIssuers();
   if (ancillary != null) {
     for (DelegationTokenIssuer subIssuer : ancillary) {
       collectDelegationTokens(subIssuer, renewer, credentials, tokens);
     }
   }
   ```
   
   I am wondering whether this call will force all target filesystems to be 
initialized. If so, we are not getting the full benefit of lazy initialization.
   Did you consider this case?
   Is there a way to avoid this? The MR/YARN team may be able to suggest an 
alternative way to load tokens.
   CC: @wangdatan @sunilgovind @rohithsharmaks
   CC: @xiaoyuyao





Issue Time Tracking
---

Worklog Id: (was: 479301)
Time Spent: 2h 10m  (was: 2h)

> ViewFS should initialize target filesystems lazily
> --
>
> Key: HADOOP-17028
> URL: https://issues.apache.org/jira/browse/HADOOP-17028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: client-mounts, fs, viewfs
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Abhishek Das
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Currently, ViewFS initializes all configured target filesystems during 
> viewfs#init itself.
> Some target filesystem initializations involve creating heavy objects and 
> proxy connections. For example, DistributedFileSystem#initialize creates a 
> DFSClient object, which opens proxy connections to the NameNode, etc.
> Consider a ViewFS configured with 10 targets using hdfs URIs and 2 targets 
> using s3a. If a client only works with the s3a targets, ViewFS will still 
> initialize all targets regardless of which ones the client actually uses. 
> That means the client pays for 10 DFS initializations and 2 s3a 
> initializations, even though the DFS initializations are unnecessary. So it 
> would be a good idea to initialize a target filesystem only when the first 
> call for that particular target scheme arrives.
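A minimal sketch of the idea, assuming a hypothetical per-mount-point holder 
(the class and method names below are illustrative, not the actual ViewFS 
change): the heavy target construction is deferred until the first call that 
actually needs that mount.
{code:java}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

/** Hypothetical lazy holder for one mount target (illustration only). */
class LazyTargetFileSystem {
  private final URI targetUri;
  private final Configuration conf;
  private volatile FileSystem target; // not created at ViewFS init time

  LazyTargetFileSystem(URI targetUri, Configuration conf) {
    this.targetUri = targetUri;
    this.conf = conf;
  }

  /** The first caller pays the cost (e.g. DFSClient/NN proxy setup). */
  FileSystem get() throws IOException {
    if (target == null) {
      synchronized (this) {
        if (target == null) {
          target = FileSystem.get(targetUri, conf);
        }
      }
    }
    return target;
  }
}
{code}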






[jira] [Work logged] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?focusedWorklogId=479289&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479289
 ]

ASF GitHub Bot logged work on HADOOP-17245:
---

Author: ASF GitHub Bot
Created on: 05/Sep/20 01:39
Start Date: 05/Sep/20 01:39
Worklog Time Spent: 10m 
  Work Description: smengcl merged pull request #2276:
URL: https://github.com/apache/hadoop/pull/2276


   





Issue Time Tracking
---

Worklog Id: (was: 479289)
Time Spent: 1.5h  (was: 1h 20m)

> Add RootedOzFS AbstractFileSystem to core-default.xml
> -
>
> Key: HADOOP-17245
> URL: https://issues.apache.org/jira/browse/HADOOP-17245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When "ofs" is default, when running mapreduce job, YarnClient fails with 
> below exception.
> {code:java}
> Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
> fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
> scheme: ofs
>  at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
>  at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
>  at java.security.AccessController.doPrivileged(Native Method){code}
> Observed that o3fs is also not defined; this JIRA will add that mapping too.
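For reference, a minimal sketch of the kind of mapping this issue adds, 
assuming the Ozone AbstractFileSystem adapters are 
org.apache.hadoop.fs.ozone.RootedOzFs (ofs) and org.apache.hadoop.fs.ozone.OzFs 
(o3fs), as the issue title suggests. Until core-default.xml ships these 
entries, a client can set the same properties on its own Configuration 
(om-host, volume, and bucket below are placeholders, and the Ozone filesystem 
jar must be on the classpath):
{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;

public class OzoneAbstractFsConfig {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The same key/value pairs the patch adds to core-default.xml
    // (class names assumed from the issue title, not verified here).
    conf.set("fs.AbstractFileSystem.ofs.impl",
        "org.apache.hadoop.fs.ozone.RootedOzFs");
    conf.set("fs.AbstractFileSystem.o3fs.impl",
        "org.apache.hadoop.fs.ozone.OzFs");
    // With the mapping present, FileContext can resolve the ofs scheme instead
    // of failing with UnsupportedFileSystemException.
    FileContext fc = FileContext.getFileContext(
        URI.create("ofs://om-host/volume/bucket"), conf);
    System.out.println(fc.getDefaultFileSystem().getUri());
  }
}
{code}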






[jira] [Updated] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17245:

Status: Patch Available  (was: Open)

> Add RootedOzFS AbstractFileSystem to core-default.xml
> -
>
> Key: HADOOP-17245
> URL: https://issues.apache.org/jira/browse/HADOOP-17245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When "ofs" is default, when running mapreduce job, YarnClient fails with 
> below exception.
> {code:java}
> Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
> fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
> scheme: ofs
>  at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
>  at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
>  at java.security.AccessController.doPrivileged(Native Method){code}
> Observed that o3fs is also not defined; this JIRA will add that mapping too.






[jira] [Updated] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17245:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add RootedOzFS AbstractFileSystem to core-default.xml
> -
>
> Key: HADOOP-17245
> URL: https://issues.apache.org/jira/browse/HADOOP-17245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When "ofs" is default, when running mapreduce job, YarnClient fails with 
> below exception.
> {code:java}
> Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
> fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
> scheme: ofs
>  at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
>  at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
>  at java.security.AccessController.doPrivileged(Native Method){code}
> Observed that o3fs is also not defined; this JIRA will add that mapping too.






[jira] [Updated] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17245:

Fix Version/s: 1.1.0

> Add RootedOzFS AbstractFileSystem to core-default.xml
> -
>
> Key: HADOOP-17245
> URL: https://issues.apache.org/jira/browse/HADOOP-17245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When "ofs" is default, when running mapreduce job, YarnClient fails with 
> below exception.
> {code:java}
> Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
> fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
> scheme: ofs
>  at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
>  at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
>  at java.security.AccessController.doPrivileged(Native Method){code}
> Observed that o3fs is also not defined; this JIRA will add that mapping too.






[jira] [Updated] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-17245:

Fix Version/s: (was: 1.1.0)
   3.4.0

> Add RootedOzFS AbstractFileSystem to core-default.xml
> -
>
> Key: HADOOP-17245
> URL: https://issues.apache.org/jira/browse/HADOOP-17245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When "ofs" is default, when running mapreduce job, YarnClient fails with 
> below exception.
> {code:java}
> Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
> fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
> scheme: ofs
>  at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
>  at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
>  at java.security.AccessController.doPrivileged(Native Method){code}
> Observed that o3fs is also not defined; this JIRA will add that mapping too.






[GitHub] [hadoop] smengcl merged pull request #2276: HADOOP-17245. Add OzoneFileSystem classes to core-default.xml.

2020-09-04 Thread GitBox


smengcl merged pull request #2276:
URL: https://github.com/apache/hadoop/pull/2276


   






[jira] [Work logged] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?focusedWorklogId=479285&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479285
 ]

ASF GitHub Bot logged work on HADOOP-17245:
---

Author: ASF GitHub Bot
Created on: 05/Sep/20 00:24
Start Date: 05/Sep/20 00:24
Worklog Time Spent: 10m 
  Work Description: umamaheswararao commented on pull request #2276:
URL: https://github.com/apache/hadoop/pull/2276#issuecomment-687488974


   +1 thanks @bharatviswa504 for the patch





Issue Time Tracking
---

Worklog Id: (was: 479285)
Time Spent: 1h 20m  (was: 1h 10m)

> Add RootedOzFS AbstractFileSystem to core-default.xml
> -
>
> Key: HADOOP-17245
> URL: https://issues.apache.org/jira/browse/HADOOP-17245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When "ofs" is default, when running mapreduce job, YarnClient fails with 
> below exception.
> {code:java}
> Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
> fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
> scheme: ofs
>  at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
>  at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
>  at java.security.AccessController.doPrivileged(Native Method){code}
> Observed that o3fs is also not defined; this JIRA will add that mapping too.






[GitHub] [hadoop] umamaheswararao commented on pull request #2276: HADOOP-17245. Add OzoneFileSystem classes to core-default.xml.

2020-09-04 Thread GitBox


umamaheswararao commented on pull request #2276:
URL: https://github.com/apache/hadoop/pull/2276#issuecomment-687488974


   +1 thanks @bharatviswa504 for the patch






[jira] [Work logged] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?focusedWorklogId=479284&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479284
 ]

ASF GitHub Bot logged work on HADOOP-17245:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 23:42
Start Date: 04/Sep/20 23:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2276:
URL: https://github.com/apache/hadoop/pull/2276#issuecomment-687473231


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 28s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  28m 41s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 31s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  16m 47s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  81m 16s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 46s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  18m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 49s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  16m 49s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m  5s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 22s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 55s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 152m 41s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2276 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 384fd79031f9 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5346cc32637 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/4/testReport/ |
   | Max. process+thread count | 2249 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/4/console |
   | versions | git=2.17.1 maven=3.6.0 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #2276: HADOOP-17245. Add OzoneFileSystem classes to core-default.xml.

2020-09-04 Thread GitBox


hadoop-yetus commented on pull request #2276:
URL: https://github.com/apache/hadoop/pull/2276#issuecomment-687473231


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 28s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  28m 41s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 31s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  16m 47s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  81m 16s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 46s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  18m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 49s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  16m 49s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m  5s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 22s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 55s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 152m 41s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2276 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 384fd79031f9 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5346cc32637 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/4/testReport/ |
   | Max. process+thread count | 2249 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/4/console |
   | versions | git=2.17.1 maven=3.6.0 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Work logged] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?focusedWorklogId=479274&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479274
 ]

ASF GitHub Bot logged work on HADOOP-17245:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 21:19
Start Date: 04/Sep/20 21:19
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2276:
URL: https://github.com/apache/hadoop/pull/2276#issuecomment-687391327


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 45s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 50s |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m  4s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  20m 45s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  97m 53s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 27s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  23m 27s |  the patch passed  |
   | -1 :x: |  compile  |  10m  1s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |  10m  1s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  mvnsite  |   0m 22s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 20s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 26s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  asflicense  |   0m 49s |  The patch generated 2 ASF License 
warnings.  |
   |  |   | 168m 20s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2276 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 335a00a1a134 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5346cc32637 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt
 |
   | mvnsite | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/1/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/1/testReport/ |
   | asflicense | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/1/artifact/o

[GitHub] [hadoop] hadoop-yetus commented on pull request #2276: HADOOP-17245. Add OzoneFileSystem classes to core-default.xml.

2020-09-04 Thread GitBox


hadoop-yetus commented on pull request #2276:
URL: https://github.com/apache/hadoop/pull/2276#issuecomment-687391327


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 45s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 50s |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m  4s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  20m 45s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  97m 53s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 27s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  23m 27s |  the patch passed  |
   | -1 :x: |  compile  |  10m  1s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |  10m  1s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  mvnsite  |   0m 22s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 20s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 26s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  asflicense  |   0m 49s |  The patch generated 2 ASF License 
warnings.  |
   |  |   | 168m 20s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2276 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 335a00a1a134 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5346cc32637 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt
 |
   | mvnsite | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/1/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/1/testReport/ |
   | asflicense | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/1/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 3248 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/1/console |
   | versions | git=2.17.1 maven=3.6.0 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Work logged] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?focusedWorklogId=479273&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479273
 ]

ASF GitHub Bot logged work on HADOOP-17245:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 21:09
Start Date: 04/Sep/20 21:09
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #2276:
URL: https://github.com/apache/hadoop/pull/2276#issuecomment-687385993


   Thank you @umamaheswararao for the review.
   I have addressed the review comments.





Issue Time Tracking
---

Worklog Id: (was: 479273)
Time Spent: 50m  (was: 40m)

> Add RootedOzFS AbstractFileSystem to core-default.xml
> -
>
> Key: HADOOP-17245
> URL: https://issues.apache.org/jira/browse/HADOOP-17245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When "ofs" is default, when running mapreduce job, YarnClient fails with 
> below exception.
> {code:java}
> Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
> fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
> scheme: ofs
>  at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
>  at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
>  at java.security.AccessController.doPrivileged(Native Method){code}
> Observed that o3fs is also not defined; this JIRA will add that mapping too.






[GitHub] [hadoop] bharatviswa504 commented on pull request #2276: HADOOP-17245. Add OzoneFileSystem classes to core-default.xml.

2020-09-04 Thread GitBox


bharatviswa504 commented on pull request #2276:
URL: https://github.com/apache/hadoop/pull/2276#issuecomment-687385993


   Thank you @umamaheswararao for the review.
   I have addressed the review comments.






[jira] [Work logged] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?focusedWorklogId=479267&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479267
 ]

ASF GitHub Bot logged work on HADOOP-17245:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 20:41
Start Date: 04/Sep/20 20:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2276:
URL: https://github.com/apache/hadoop/pull/2276#issuecomment-687372041


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 46s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 34s |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 26s |  root in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | +1 :green_heart: |  compile  |  44m 16s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  mvnsite  |   0m 36s |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |  60m 48s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 37s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 34s |  hadoop-common in trunk failed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 12s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  compile  |  18m 40s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |  18m 40s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 2060 new + 0 unchanged - 
0 fixed = 2060 total (was 0)  |
   | +1 :green_heart: |  compile  |  16m 50s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  16m 50s |  the patch passed  |
   | -1 :x: |  mvnsite  |   0m 33s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 18s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 33s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 34s |  hadoop-common in the patch failed with 
JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 33s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 120m 57s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2276 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux b3aa9da629f0 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5346cc32637 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/3/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/3/artifact/out/branch-compile-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt
 |
   | mvnsite | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/3/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/3/artifact/out/branch-javadoc-

[GitHub] [hadoop] hadoop-yetus commented on pull request #2276: HADOOP-17245. Add OzoneFileSystem classes to core-default.xml.

2020-09-04 Thread GitBox


hadoop-yetus commented on pull request #2276:
URL: https://github.com/apache/hadoop/pull/2276#issuecomment-687372041


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 46s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 34s |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 26s |  root in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | +1 :green_heart: |  compile  |  44m 16s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  mvnsite  |   0m 36s |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |  60m 48s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 37s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 34s |  hadoop-common in trunk failed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 12s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  compile  |  18m 40s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |  18m 40s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 2060 new + 0 unchanged - 
0 fixed = 2060 total (was 0)  |
   | +1 :green_heart: |  compile  |  16m 50s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  16m 50s |  the patch passed  |
   | -1 :x: |  mvnsite  |   0m 33s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 18s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 33s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 34s |  hadoop-common in the patch failed with 
JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 33s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 120m 57s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2276 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux b3aa9da629f0 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5346cc32637 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/3/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/3/artifact/out/branch-compile-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt
 |
   | mvnsite | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/3/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/3/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/3/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt
 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/3/artifact/out/patch-mvninstall-hadoop-common-proje

[GitHub] [hadoop] hadoop-yetus commented on pull request #2276: HADOOP-17245. Add OzoneFileSystem classes to core-default.xml.

2020-09-04 Thread GitBox


hadoop-yetus commented on pull request #2276:
URL: https://github.com/apache/hadoop/pull/2276#issuecomment-687366530


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  17m 25s |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 34s |  root in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 27s |  root in trunk failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  mvnsite  |   0m 30s |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |  37m 11s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 13s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |  24m 13s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 2056 new + 0 unchanged - 
0 fixed = 2056 total (was 0)  |
   | +1 :green_heart: |  compile  |  17m 26s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  javac  |  17m 26s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1949 new + 0 unchanged - 
0 fixed = 1949 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 25s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 26s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 46s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 115m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2276 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 980a75217b31 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5346cc32637 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/2/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/2/artifact/out/branch-compile-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/2/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt
 |
   | mvnsite | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/2/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/2/artifact/out/diff-compile-javac-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/

[jira] [Work logged] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?focusedWorklogId=479263&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479263
 ]

ASF GitHub Bot logged work on HADOOP-17245:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 20:28
Start Date: 04/Sep/20 20:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2276:
URL: https://github.com/apache/hadoop/pull/2276#issuecomment-687366530


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  17m 25s |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 34s |  root in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 27s |  root in trunk failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  mvnsite  |   0m 30s |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |  37m 11s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 13s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |  24m 13s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 2056 new + 0 unchanged - 
0 fixed = 2056 total (was 0)  |
   | +1 :green_heart: |  compile  |  17m 26s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  javac  |  17m 26s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1949 new + 0 unchanged - 
0 fixed = 1949 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 25s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 26s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 46s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 115m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2276 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 980a75217b31 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5346cc32637 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/2/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/2/artifact/out/branch-compile-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2276/2/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_26

[jira] [Work logged] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?focusedWorklogId=479262&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479262
 ]

ASF GitHub Bot logged work on HADOOP-17245:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 20:25
Start Date: 04/Sep/20 20:25
Worklog Time Spent: 10m 
  Work Description: umamaheswararao commented on a change in pull request 
#2276:
URL: https://github.com/apache/hadoop/pull/2276#discussion_r483830038



##
File path: 
hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
##
@@ -3979,4 +3979,34 @@
   ThreadGroup approach is preferred for better performance.
 
   
+
+  
+fs.AbstractFileSystem.ofs.impl
+org.apache.hadoop.fs.ozone.RootedOzFs
+The AbstractFileSystem for Rooted Ozone
+  FileSystem ofs uri
+  
+
+  
+fs.ofs.impl
+org.apache.hadoop.fs.ozone.RootedOzoneFileSystem
+
+  The implementation class of the Rooted OzoneFileSystem
+
+  
+
+  
+fs.o3fs.impl

Review comment:
   Same as above.
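
   For context, a small sketch of why these keys matter (hypothetical usage, not part of this patch; the URI and service names are made up, and actually running it against "ofs" also needs the Ozone client jars and a reachable cluster): `FileContext` resolves the `AbstractFileSystem` implementation for a URI scheme from `fs.AbstractFileSystem.<scheme>.impl`, so without an entry for `ofs` the lookup fails with the `UnsupportedFileSystemException` quoted in the JIRA description.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.UnsupportedFileSystemException;

public class OfsSchemeResolution {
  public static void main(String[] args) throws Exception {
    // Picks up core-default.xml (and core-site.xml if present on the classpath).
    Configuration conf = new Configuration();
    URI uri = URI.create("ofs://ozone-service/volume/bucket/key"); // hypothetical URI
    try {
      // Resolution consults fs.AbstractFileSystem.ofs.impl for the "ofs" scheme.
      FileContext.getFileContext(uri, conf);
      System.out.println("AbstractFileSystem resolved for scheme: " + uri.getScheme());
    } catch (UnsupportedFileSystemException e) {
      // This is the failure mode before the key is added to core-default.xml.
      System.out.println(e.getMessage());
    }
  }
}
```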





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479262)
Time Spent: 20m  (was: 10m)

> Add RootedOzFS AbstractFileSystem to core-default.xml
> -
>
> Key: HADOOP-17245
> URL: https://issues.apache.org/jira/browse/HADOOP-17245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When "ofs" is default, when running mapreduce job, YarnClient fails with 
> below exception.
> {code:java}
> Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
> fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
> scheme: ofs
>  at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
>  at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
>  at java.security.AccessController.doPrivileged(Native Method){code}
> Observed that o3fs is also not defined, will use this jira to add those too.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2276: HADOOP-17245. Add OzoneFileSystem classes to core-default.xml.

2020-09-04 Thread GitBox


umamaheswararao commented on a change in pull request #2276:
URL: https://github.com/apache/hadoop/pull/2276#discussion_r483830038



##
File path: 
hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
##
@@ -3979,4 +3979,34 @@
   ThreadGroup approach is preferred for better performance.
 
   
+
+  
+fs.AbstractFileSystem.ofs.impl
+org.apache.hadoop.fs.ozone.RootedOzFs
+The AbstractFileSystem for Rooted Ozone
+  FileSystem ofs uri
+  
+
+  
+fs.ofs.impl
+org.apache.hadoop.fs.ozone.RootedOzoneFileSystem
+
+  The implementation class of the Rooted OzoneFileSystem
+
+  
+
+  
+fs.o3fs.impl

Review comment:
   Same as above.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-17245:

Description: 
When "ofs" is default, when running mapreduce job, YarnClient fails with below 
exception.
{code:java}
Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
scheme: ofs
 at 
org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
 at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
 at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
 at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
 at java.security.AccessController.doPrivileged(Native Method){code}

Observed that o3fs is also not defined, will use this jira to add those too.

  was:
When "ofs" is default, when running mapreduce job, YarnClient fails with below 
exception.
{code:java}
Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
scheme: ofs
 at 
org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
 at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
 at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
 at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
 at java.security.AccessController.doPrivileged(Native Method){code}


> Add RootedOzFS AbstractFileSystem to core-default.xml
> -
>
> Key: HADOOP-17245
> URL: https://issues.apache.org/jira/browse/HADOOP-17245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When "ofs" is default, when running mapreduce job, YarnClient fails with 
> below exception.
> {code:java}
> Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
> fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
> scheme: ofs
>  at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
>  at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
>  at java.security.AccessController.doPrivileged(Native Method){code}
> Observed that o3fs is also not defined, will use this jira to add those too.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 opened a new pull request #2276: HADOOP-17245. Add RootedOzFS AbstractFileSystem to core-default.xml.

2020-09-04 Thread GitBox


bharatviswa504 opened a new pull request #2276:
URL: https://github.com/apache/hadoop/pull/2276


   Add RootedOzoneFs to core-default similar to other filesystems.
   https://issues.apache.org/jira/browse/HADOOP-17245
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17245:

Labels: pull-request-available  (was: )

> Add RootedOzFS AbstractFileSystem to core-default.xml
> -
>
> Key: HADOOP-17245
> URL: https://issues.apache.org/jira/browse/HADOOP-17245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When "ofs" is default, when running mapreduce job, YarnClient fails with 
> below exception.
> {code:java}
> Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
> fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
> scheme: ofs
>  at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
>  at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
>  at java.security.AccessController.doPrivileged(Native Method){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?focusedWorklogId=479218&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479218
 ]

ASF GitHub Bot logged work on HADOOP-17245:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 18:29
Start Date: 04/Sep/20 18:29
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 opened a new pull request #2276:
URL: https://github.com/apache/hadoop/pull/2276


   Add RootedOzoneFs to core-default similar to other filesystems.
   https://issues.apache.org/jira/browse/HADOOP-17245
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479218)
Remaining Estimate: 0h
Time Spent: 10m

> Add RootedOzFS AbstractFileSystem to core-default.xml
> -
>
> Key: HADOOP-17245
> URL: https://issues.apache.org/jira/browse/HADOOP-17245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When "ofs" is default, when running mapreduce job, YarnClient fails with 
> below exception.
> {code:java}
> Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
> fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
> scheme: ofs
>  at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
>  at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
>  at java.security.AccessController.doPrivileged(Native Method){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HADOOP-17245:
---

 Summary: Add RootedOzFS AbstractFileSystem to core-default.xml
 Key: HADOOP-17245
 URL: https://issues.apache.org/jira/browse/HADOOP-17245
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


When "ofs" is default, when running mapreduce job, YarnClient fails with below 
exception.
Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
scheme: ofs
at 
org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
at 
org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
at java.security.AccessController.doPrivileged(Native Method)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17245) Add RootedOzFS AbstractFileSystem to core-default.xml

2020-09-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-17245:

Description: 
When "ofs" is default, when running mapreduce job, YarnClient fails with below 
exception.
{code:java}
Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
scheme: ofs
 at 
org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
 at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
 at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
 at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
 at java.security.AccessController.doPrivileged(Native Method){code}

  was:
When "ofs" is default, when running mapreduce job, YarnClient fails with below 
exception.
Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
scheme: ofs
at 
org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
at 
org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
at java.security.AccessController.doPrivileged(Native Method)


> Add RootedOzFS AbstractFileSystem to core-default.xml
> -
>
> Key: HADOOP-17245
> URL: https://issues.apache.org/jira/browse/HADOOP-17245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> When "ofs" is default, when running mapreduce job, YarnClient fails with 
> below exception.
> {code:java}
> Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: 
> fs.AbstractFileSystem.ofs.impl=null: No AbstractFileSystem configured for 
> scheme: ofs
>  at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:176)
>  at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:265)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:341)
>  at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:338)
>  at java.security.AccessController.doPrivileged(Native Method){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] liuml07 commented on a change in pull request #2189: HDFS-15025. Applying NVDIMM storage media to HDFS

2020-09-04 Thread GitBox


liuml07 commented on a change in pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#discussion_r483780758



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
##
@@ -34,28 +34,35 @@
 @InterfaceStability.Unstable
 public enum StorageType {
   // sorted by the speed of the storage types, from fast to slow
-  RAM_DISK(true),
-  SSD(false),
-  DISK(false),
-  ARCHIVE(false),
-  PROVIDED(false);
+  RAM_DISK(true, true),
+  NVDIMM(false, true),
+  SSD(false, false),
+  DISK(false, false),
+  ARCHIVE(false, false),
+  PROVIDED(false, false);
 
   private final boolean isTransient;
+  private final boolean isRAM;
 
   public static final StorageType DEFAULT = DISK;
 
   public static final StorageType[] EMPTY_ARRAY = {};
 
   private static final StorageType[] VALUES = values();
 
-  StorageType(boolean isTransient) {
+  StorageType(boolean isTransient, boolean isRAM) {
 this.isTransient = isTransient;
+this.isRAM = isRAM;
   }
 
   public boolean isTransient() {
 return isTransient;
   }
 
+  public boolean isRAM() {
+return isRAM;
+  }

Review comment:
   Thanks for the clarification, @YaYun-Wang. I think we are both on the same 
page now. @brahmareddybattula Does this make sense to you? Thanks
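
   As an aside, a tiny illustration of what the extra flag buys callers. This is a standalone sketch that only mirrors the enum shape in the diff above (the real class is `org.apache.hadoop.fs.StorageType`); it is not part of the patch.

```java
// Standalone sketch only; mirrors the patched enum's two flags for illustration.
enum StorageTypeSketch {
  RAM_DISK(true, true),
  NVDIMM(false, true),
  SSD(false, false),
  DISK(false, false),
  ARCHIVE(false, false),
  PROVIDED(false, false);

  private final boolean isTransient;
  private final boolean isRAM;

  StorageTypeSketch(boolean isTransient, boolean isRAM) {
    this.isTransient = isTransient;
    this.isRAM = isRAM;
  }

  boolean isTransient() { return isTransient; }
  boolean isRAM() { return isRAM; }
}

public class StorageTypeDemo {
  public static void main(String[] args) {
    for (StorageTypeSketch t : StorageTypeSketch.values()) {
      // NVDIMM is RAM-backed but not transient, unlike RAM_DISK.
      System.out.println(t + ": ram=" + t.isRAM() + ", transient=" + t.isTransient());
    }
  }
}
```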





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17244) ITestS3AFileContextMainOperations#testRenameDirectoryAsNonExistentDirectory test failure on -Dauth

2020-09-04 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17190874#comment-17190874
 ] 

Steve Loughran commented on HADOOP-17244:
-

{code}
[ERROR] 
testRenameDirectoryAsNonExistentDirectory(org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations)
  Time elapsed: 10.733 s  <<< FAILURE!
java.lang.AssertionError: Source exists expected: but was:
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at 
org.apache.hadoop.fs.FileContextMainOperationsBaseTest.rename(FileContextMainOperationsBaseTest.java:1312)
at 
org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testRenameDirectoryAsNonExistentDirectory(FileContextMainOperationsBaseTest.java:1165)
at 
org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testRenameDirectoryAsNonExistentDirectory(FileContextMainOperationsBaseTest.java:1151)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)

[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   
ITestS3AFileContextMainOperations>FileContextMainOperationsBaseTest.testRenameDirectoryAsNonExistentDirectory:1151->FileContextMainOperationsBaseTest.testRenameDirectoryAsNonExistentDirectory:1165->FileContextMainOperationsBaseTest.rename:1312
 Source exists expected: but was:
[INFO] 
{code}

> ITestS3AFileContextMainOperations#testRenameDirectoryAsNonExistentDirectory 
> test failure on -Dauth
> --
>
> Key: HADOOP-17244
> URL: https://issues.apache.org/jira/browse/HADOOP-17244
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>
> Test failure: 
> {{ITestS3AFileContextMainOperations#testRenameDirectoryAsNonExistentDirectory}}
> This is repeatable on -Dauth runs (we haven't been running them, have we?)
> Either it's from the recent dir marker changes (initial hypothesis) or it's 
> been lurking a while and not been picked up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17244) ITestS3AFileContextMainOperations#testRenameDirectoryAsNonExistentDirectory test failure on -Dauth

2020-09-04 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-17244:
---

 Summary: 
ITestS3AFileContextMainOperations#testRenameDirectoryAsNonExistentDirectory 
test failure on -Dauth
 Key: HADOOP-17244
 URL: https://issues.apache.org/jira/browse/HADOOP-17244
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.1
Reporter: Steve Loughran
Assignee: Steve Loughran


Test failure: 
{{ITestS3AFileContextMainOperations#testRenameDirectoryAsNonExistentDirectory}}

This is repeatable on -Dauth runs (we haven't been running them, have we?)

Either it's from the recent dir marker changes (initial hypothesis) or it's been 
lurking a while and not been picked up.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] fengnanli commented on a change in pull request #2266: [RBF] HDFS-15554 Force router check file existence in destinations before adding/updating mount points

2020-09-04 Thread GitBox


fengnanli commented on a change in pull request #2266:
URL: https://github.com/apache/hadoop/pull/2266#discussion_r483767772



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
##
@@ -562,11 +595,35 @@ public GetDestinationResponse getDestination(
   LOG.error("Cannot get location for {}: {}",
   src, ioe.getMessage());
 }
-if (nsIds.isEmpty() && !locations.isEmpty()) {
-  String nsId = locations.get(0).getNameserviceId();
-  nsIds.add(nsId);
+return nsIds;
+  }
+
+  /**
+   * Verify the file exists in destination nameservices to avoid dangling
+   * mount points.
+   *
+   * @param entry the new mount points added, could be from add or update.
+   * @return destination nameservices where the file doesn't exist.
+   * @throws IOException
+   */
+  private List verifyFileInDestinations(MountTable entry)

Review comment:
   If that's the case, I will try to fix all in one batch.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16202) Enhance S3A openFile()

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16202?focusedWorklogId=479199&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479199
 ]

ASF GitHub Bot logged work on HADOOP-16202:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 17:41
Start Date: 04/Sep/20 17:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2168:
URL: https://github.com/apache/hadoop/pull/2168#issuecomment-687289875


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  28m 48s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
7 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 15s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 43s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  16m 52s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 48s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 23s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 13s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  5s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 10s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 26s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 29s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 18s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  24m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  7s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  22m  7s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m 33s |  root: The patch generated 14 new 
+ 158 unchanged - 0 fixed = 172 total (was 158)  |
   | +1 :green_heart: |  mvnsite  |   2m 31s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 9 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  19m 24s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 27s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   4m 34s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  10m 39s |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 53s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 223m 21s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.contract.rawlocal.TestRawlocalContractOpen |
   |   | hadoop.fs.contract.localfs.TestLocalFSContractOpen |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2168/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2168 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux cfc95ba3564f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5346cc32637 |
   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2168: HADOOP-16202. Enhance openFile()

2020-09-04 Thread GitBox


hadoop-yetus commented on pull request #2168:
URL: https://github.com/apache/hadoop/pull/2168#issuecomment-687289875


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  28m 48s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
7 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 15s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 43s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  16m 52s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 48s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 23s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 13s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  5s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 10s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 26s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 29s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 18s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  24m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  7s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  22m  7s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m 33s |  root: The patch generated 14 new 
+ 158 unchanged - 0 fixed = 172 total (was 158)  |
   | +1 :green_heart: |  mvnsite  |   2m 31s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 9 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  19m 24s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 27s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   4m 34s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  10m 39s |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 53s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 223m 21s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.contract.rawlocal.TestRawlocalContractOpen |
   |   | hadoop.fs.contract.localfs.TestLocalFSContractOpen |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2168/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2168 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux cfc95ba3564f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5346cc32637 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2168/9/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://ci-hadoop.ap

[GitHub] [hadoop] goiri commented on a change in pull request #2266: [RBF] HDFS-15554 Force router check file existence in destinations before adding/updating mount points

2020-09-04 Thread GitBox


goiri commented on a change in pull request #2266:
URL: https://github.com/apache/hadoop/pull/2266#discussion_r483760982



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
##
@@ -562,11 +595,35 @@ public GetDestinationResponse getDestination(
   LOG.error("Cannot get location for {}: {}",
   src, ioe.getMessage());
 }
-if (nsIds.isEmpty() && !locations.isEmpty()) {
-  String nsId = locations.get(0).getNameserviceId();
-  nsIds.add(nsId);
+return nsIds;
+  }
+
+  /**
+   * Verify the file exists in destination nameservices to avoid dangling
+   * mount points.
+   *
+   * @param entry the new mount points added, could be from add or update.
+   * @return destination nameservices where the file doesn't exist.
+   * @throws IOException
+   */
+  private List verifyFileInDestinations(MountTable entry)

Review comment:
   Eventually we may want to make all the tests correct, but for now I'm fine 
setting up the config for just the new tests.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17028) ViewFS should initialize target filesystems lazily

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17028?focusedWorklogId=479196&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479196
 ]

ASF GitHub Bot logged work on HADOOP-17028:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 17:15
Start Date: 04/Sep/20 17:15
Worklog Time Spent: 10m 
  Work Description: umamaheswararao commented on pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#issuecomment-687277527


   Thanks @steveloughran for review!
   In addition to his comments, I have the following question/comment.
   In practice, I am thinking this lazy initialization will not give us real 
benefit in the following case with MR/YARN.
   In DelegationTokenIssuer#collectDelegationTokens, it will try to get tokens 
from all children.
   
   ```
   // Now collect the tokens from the children.
   final DelegationTokenIssuer[] ancillary =
   issuer.getAdditionalTokenIssuers();
   if (ancillary != null) {
 for (DelegationTokenIssuer subIssuer : ancillary) {
   collectDelegationTokens(subIssuer, renewer, credentials, tokens);
 }
   } 
   ```
   
   I am wondering whether this call will make all the target filesystems be 
initialized. That means we are not getting the full benefit of this lazy 
initialization. 
   Did you consider this case? 
   Is there a way to avoid this? The MR/YARN team can suggest something if we 
have an alternative way to load tokens. 
   CC: @wangdatan  @sunilgovind @rohithsharmaks 
   CC: @xiaoyuyao 
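
   To illustrate the concern, here is a standalone sketch (not ViewFS code; all names are made up): even if each mounted target is created lazily behind a holder, any operation that enumerates every child — as token collection does — ends up forcing all of the initializations.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class LazyInitSketch {

  /** Hypothetical lazy holder for a mounted target filesystem. */
  static final class LazyTarget {
    private final String name;
    private final Supplier<String> factory;
    private String instance; // created on first use

    LazyTarget(String name, Supplier<String> factory) {
      this.name = name;
      this.factory = factory;
    }

    synchronized String get() {
      if (instance == null) {
        // Stands in for the expensive step, e.g. creating a DFSClient/NN proxy.
        System.out.println("initializing target fs: " + name);
        instance = factory.get();
      }
      return instance;
    }
  }

  public static void main(String[] args) {
    List<LazyTarget> mounts = new ArrayList<>();
    mounts.add(new LazyTarget("hdfs://nn1/", () -> "dfs-client-1"));
    mounts.add(new LazyTarget("hdfs://nn2/", () -> "dfs-client-2"));
    mounts.add(new LazyTarget("s3a://bucket/", () -> "s3a-client"));

    // A path-based call touches only the target it needs: one initialization.
    mounts.get(2).get();

    // Collecting tokens walks every child, so every remaining target initializes too.
    for (LazyTarget target : mounts) {
      target.get();
    }
  }
}
```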



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479196)
Time Spent: 2h  (was: 1h 50m)

> ViewFS should initialize target filesystems lazily
> --
>
> Key: HADOOP-17028
> URL: https://issues.apache.org/jira/browse/HADOOP-17028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: client-mounts, fs, viewfs
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Abhishek Das
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Currently ViewFS initializes all configured target filesystems in viewfs#init 
> itself.
> Some target filesystem initializations involve creating heavy objects and 
> proxy connections. Ex: DistributedFileSystem#initialize will create a DFSClient 
> object, which creates proxy connections to the NN, etc.
> For example: suppose ViewFS is configured with 10 targets with hdfs URIs and 2 
> targets with s3a.
> If a client only works with an s3a target, ViewFS will still initialize all 
> targets irrespective of which ones the client is interested in. That means the 
> client will create 10 DFS initializations and 2 s3a initializations. The DFS 
> initializations are unnecessary here. So, it would be a good idea to initialize 
> a target fs only when the first usage call comes in for that particular target 
> fs scheme. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] umamaheswararao commented on pull request #2260: HADOOP-17028. ViewFS should initialize mounted target filesystems lazily

2020-09-04 Thread GitBox


umamaheswararao commented on pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#issuecomment-687277527


   Thanks @steveloughran for review!
   In addition to his comments, I have the following question/comment.
   In practice, I am thinking this lazy initialization will not give us real 
benefit in the following case with MR/YARN.
   In DelegationTokenIssuer#collectDelegationTokens, it will try to get tokens 
from all children.
   
   ```
   // Now collect the tokens from the children.
   final DelegationTokenIssuer[] ancillary =
   issuer.getAdditionalTokenIssuers();
   if (ancillary != null) {
 for (DelegationTokenIssuer subIssuer : ancillary) {
   collectDelegationTokens(subIssuer, renewer, credentials, tokens);
 }
   } 
   ```
   
   I am wondering whether this call will make all the target filesystems be 
initialized. That means we are not getting the full benefit of this lazy 
initialization. 
   Did you consider this case? 
   Is there a way to avoid this? The MR/YARN team can suggest something if we 
have an alternative way to load tokens. 
   CC: @wangdatan  @sunilgovind @rohithsharmaks 
   CC: @xiaoyuyao 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2204: YARN-10393. Make the heartbeat request from NM to RM consistent across heartbeat ID.

2020-09-04 Thread GitBox


hadoop-yetus commented on pull request #2204:
URL: https://github.com/apache/hadoop/pull/2204#issuecomment-687256507


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  32m  9s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 16s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m  6s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 57s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 26s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 23s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  2s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 22s |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 121 unchanged - 0 fixed = 122 total (was 121)  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 10s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 23s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  21m 51s |  hadoop-yarn-server-nodemanager in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 137m 13s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2204/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2204 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 88b19e505584 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5346cc32637 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2204/1/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2204/1/testReport/ |
   | Max. process+thread count | 317 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2204/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   

[jira] [Work logged] (HADOOP-16830) Add public IOStatistics API; S3A to support

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16830?focusedWorklogId=479180&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479180
 ]

ASF GitHub Bot logged work on HADOOP-16830:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 16:17
Start Date: 04/Sep/20 16:17
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-687247574


   OK, despite my force push losing @jimmy-zuber-amzn 's comments, I agree with 
the points about thread safety. In my head I'd imagined that we'd build that 
implementation map once and then iterate over it, but I can see benefits in 
supporting dynamic addition of new values to both the snapshot and the dynamic 
ones.
   
   Snapshot: add an entry to the map
   Dynamic: add new AtomicLong etc. entries to the appropriate map
   
   This would let us create a minimal snapshot and then pass it around; as it 
was passed around it would collect values, *without you needing to define up 
front all stats to collect*. The work here needs to be lined up for that, with 
iterators over the maps being resilient to new values being added.
   
   For the dynamic stuff -> ConcurrentHashMap.
   For Snapshot, it's trickier as they need to be Java serializable, so that 
Spark & co. can forward them around. There I will have to do one of:
   
   * mark the maps all as transient, and then in read/write actually save and 
restore the data as TreeMaps (or just arrays of entries)
   * make the accessors to the iterators synchronized and return a snapshot from 
the iterator. I think that will actually be the easiest approach... I just need 
to make sure the operations that update the maps are also synchronized.
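
   A minimal sketch of the first option, with made-up class and field names (this is not the actual IOStatistics implementation): keep the live map concurrent and transient, and copy it into a TreeMap inside writeObject/readObject so serialization never races with updates.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: not the real IOStatistics snapshot class.
public class CounterSnapshotSketch implements Serializable {

  private static final long serialVersionUID = 1L;

  // Live map is concurrent so values can keep arriving while the snapshot is
  // passed around; transient because its contents are written out manually.
  private transient ConcurrentHashMap<String, Long> counters = new ConcurrentHashMap<>();

  public void setCounter(String key, long value) {
    counters.put(key, value);
  }

  public Map<String, Long> counters() {
    return counters;
  }

  private void writeObject(ObjectOutputStream out) throws IOException {
    out.defaultWriteObject();
    // Serialize a stable copy (TreeMap) rather than the live concurrent map.
    out.writeObject(new TreeMap<>(counters));
  }

  @SuppressWarnings("unchecked")
  private void readObject(ObjectInputStream in)
      throws IOException, ClassNotFoundException {
    in.defaultReadObject();
    counters = new ConcurrentHashMap<>((Map<String, Long>) in.readObject());
  }

  public static void main(String[] args) throws Exception {
    CounterSnapshotSketch snap = new CounterSnapshotSketch();
    snap.setCounter("stream_read_bytes", 4096L);

    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (ObjectOutputStream oos = new ObjectOutputStream(bytes)) {
      oos.writeObject(snap);
    }
    try (ObjectInputStream ois =
        new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
      CounterSnapshotSketch restored = (CounterSnapshotSketch) ois.readObject();
      System.out.println(restored.counters()); // {stream_read_bytes=4096}
    }
  }
}
```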



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479180)
Time Spent: 50m  (was: 40m)

> Add public IOStatistics API; S3A to support
> ---
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Applications like to collect the statistics which specific operations take, 
> by collecting exactly those operations done during the execution of FS API 
> calls by their individual worker threads, and returning these to their job 
> driver
> * S3A has a statistics API for some streams, but it's a non-standard one; 
> Impala &c can't use it
> * FileSystem storage statistics are public, but as they aren't cross-thread, 
> they don't aggregate properly
> Proposed
> # A new IOStatistics interface to serve up statistics
> # S3A to implement
> # other stores to follow
> # Pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to think about how best to offer an API for operation context 
> stats, and how to actually implement.
> ThreadLocal isn't enough because the helper threads need to update on the 
> thread local value of the instigator
> My Initial PoC doesn't address that issue, but it shows what I'm thinking of



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2069: HADOOP-16830. IOStatistics API.

2020-09-04 Thread GitBox


steveloughran commented on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-687247574


   OK, despite my force push losing @jimmy-zuber-amzn 's comments, I agree with 
the points about thread safety. In my head I'd imagined that we'd build that 
implementation map once and then iterate over it, but I can see benefits in 
supporting dynamic addition of new values to both the snapshot and the dynamic 
ones.
   
   Snapshot: add an entry to the map
   Dynamic: add new AtomicLong etc. entries to the appropriate map
   
   This would let us create a minimal snapshot and then pass it around; as it 
was passed around it would collect values, *without you needing to define up 
front all stats to collect*. The work here needs to be lined up for that, with 
iterators over the maps being resilient to new values being added.
   
   For the dynamic stuff -> ConcurrentHashMap.
   For Snapshot, it's trickier as they need to be Java serializable, so that 
Spark & co. can forward them around. There I will have to do one of:
   
   * mark the maps all as transient, and then in read/write actually save and 
restore the data as TreeMaps (or just arrays of entries)
   * make the accessors to the iterators synchronized and return a snapshot from 
the iterator. I think that will actually be the easiest approach... I just need 
to make sure the operations that update the maps are also synchronized.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16830) Add public IOStatistics API; S3A to support

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16830?focusedWorklogId=479174&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479174
 ]

ASF GitHub Bot logged work on HADOOP-16830:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 16:08
Start Date: 04/Sep/20 16:08
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#2069:
URL: https://github.com/apache/hadoop/pull/2069#discussion_r483717630



##
File path: 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/iostatistics.md
##
@@ -0,0 +1,432 @@
+
+
+# Statistic collection with the IOStatistics API
+
+```java
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+```
+
+The `IOStatistics` API is intended to provide statistics on individual IO
+classes -such as input and output streams, *in a standard way which 
+applications can query*
+
+Many filesystem-related classes have implemented statistics gathering
+and provided private/unstable ways to query this, but as they were
+not common across implementations it was unsafe for applications
+to reference these values. Example: `S3AInputStream` and its statistics
+API. This is used in internal tests, but cannot be used downstream in
+applications such as Apache Hive or Apache HBase.
+
+The IOStatistics API is intended to 
+
+1. Be instance specific, rather than shared across multiple instances
+   of a class, or thread local.
+1. Be public and stable enough to be used by applications.
+1. Be easy to use in applications written in Java, Scala, and, via libhdfs, 
C/C++
+1. Have foundational interfaces and classes in the `hadoop-common` JAR.
+
+## Core Model
+
+Any class *may* implement `IOStatisticsSource` in order to
+provide statistics.
+
+Wrapper I/O Classes such as `FSDataInputStream` and `FSDataOutputStream` 
*should*
+implement the interface and forward it to the wrapped class, if they also
+implement it -and return `null` if they do not.
+
+`IOStatisticsSource` implementations `getIOStatistics()` return an
+instance of `IOStatistics` enumerating the statistics of that specific
+instance.
+
+The `IOStatistics` Interface exports five kinds of statistic:
+
+
+| Category | Type | Description |
+|--|--|-|
+| `counter`| `long`  | a counter which may increase in value; 
SHOULD BE >= 0 |
+| `gauge`  | `long`  | an arbitrary value which can go down as 
well as up; SHOULD BE >= 0|
+| `minimum`| `long`  | a minimum value; MAY BE negative |
+| `maximum`| `long`  | a maximum value;  MAY BE negative |
+| `meanStatistic` | `MeanStatistic` | an arithmetic mean and sample size; mean 
MAY BE negative|
+
+Four are simple `long` values, differing in how they are likely to
+change and how they are aggregated.
+
+
+ Aggregation of Statistic Values
+
+For the different statistic category, the result of `aggregate(x, y)` is
+
+| Category | Aggregation |
+|--|-|
+| `counter`| `min(0, x) + min(0, y)` |

Review comment:
   yeah -fixed
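   
   As a usage illustration of the model in the excerpt above, here is a minimal 
sketch of how downstream code might query a statistics source. Only 
`getIOStatistics()` and the five statistic kinds appear in the excerpt; the 
`counters()` accessor and the interface shapes below are assumptions, not the 
final interfaces.
   
   ```java
   import java.util.Map;
   
   // Hypothetical shapes matching the model above, not the real Hadoop classes.
   interface IOStatistics {
     Map<String, Long> counters();     // assumption: counters exposed as a map
   }
   
   interface IOStatisticsSource {
     IOStatistics getIOStatistics();   // may return null if nothing is published
   }
   
   public class IOStatisticsQueryExample {
   
     /** Print every counter published by a stream, if it publishes any. */
     public static void printCounters(Object stream) {
       if (stream instanceof IOStatisticsSource) {
         IOStatistics stats = ((IOStatisticsSource) stream).getIOStatistics();
         if (stats != null) {
           stats.counters().forEach((name, value) ->
               System.out.println(name + " = " + value));
         }
       }
     }
   }
   ```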





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479174)
Time Spent: 40m  (was: 0.5h)

> Add public IOStatistics API; S3A to support
> ---
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Applications like to collect the statistics which specific operations take, 
> by collecting exactly those operations done during the execution of FS API 
> calls by their individual worker threads, and returning these to their job 
> driver
> * S3A has a statistics API for some streams, but it's a non-standard one; 
> Impala &c can't use it
> * FileSystem storage statistics are public, but as they aren't cross-thread, 
> they don't aggregate properly
> Proposed
> # A new IOStatistics interface to serve up statistics
> # S3A to implement
> # other stores to follow
> # Pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to think about how best to offer an API for operation context 
> stats, and how to actually implement.
> ThreadLocal isn't enough because the helper th

[GitHub] [hadoop] steveloughran commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.

2020-09-04 Thread GitBox


steveloughran commented on a change in pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#discussion_r483717630



##
File path: 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/iostatistics.md
##
@@ -0,0 +1,432 @@
+
+
+# Statistic collection with the IOStatistics API
+
+```java
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+```
+
+The `IOStatistics` API is intended to provide statistics on individual IO
+classes, such as input and output streams, *in a standard way which
+applications can query*.
+
+Many filesystem-related classes have implemented statistics gathering
+and provided private/unstable ways to query this, but as they were
+not common across implementations it was unsafe for applications
+to reference these values. Example: `S3AInputStream` and its statistics
+API. This is used in internal tests, but cannot be used downstream in
+applications such as Apache Hive or Apache HBase.
+
+The IOStatistics API is intended to 
+
+1. Be instance specific, rather than shared across multiple instances
+   of a class, or thread local.
+1. Be public and stable enough to be used by applications.
+1. Be easy to use in applications written in Java, Scala, and, via libhdfs, 
C/C++
+1. Have foundational interfaces and classes in the `hadoop-common` JAR.
+
+## Core Model
+
+Any class *may* implement `IOStatisticsSource` in order to
+provide statistics.
+
+Wrapper I/O Classes such as `FSDataInputStream` and `FSDataOutputStream` 
*should*
+implement the interface and forward it to the wrapped class, if they also
+implement it -and return `null` if they do not.
+
+`IOStatisticsSource` implementations `getIOStatistics()` return an
+instance of `IOStatistics` enumerating the statistics of that specific
+instance.
+
+The `IOStatistics` Interface exports five kinds of statistic:
+
+
+| Category | Type | Description |
+|--|--|-|
+| `counter`| `long`  | a counter which may increase in value; 
SHOULD BE >= 0 |
+| `gauge`  | `long`  | an arbitrary value which can go down as 
well as up; SHOULD BE >= 0|
+| `minimum`| `long`  | a minimum value; MAY BE negative |
+| `maximum`| `long`  | a maximum value;  MAY BE negative |
+| `meanStatistic` | `MeanStatistic` | an arithmetic mean and sample size; mean 
MAY BE negative|
+
+Four are simple `long` values, differing in how they are likely to
+change and how they are aggregated.
+
+
+ Aggregation of Statistic Values
+
+For the different statistic category, the result of `aggregate(x, y)` is
+
+| Category | Aggregation |
+|--|-|
+| `counter`| `min(0, x) + min(0, y)` |

Review comment:
   yeah -fixed





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16830) Add public IOStatistics API; S3A to support

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16830?focusedWorklogId=479172&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479172
 ]

ASF GitHub Bot logged work on HADOOP-16830:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 16:05
Start Date: 04/Sep/20 16:05
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#2069:
URL: https://github.com/apache/hadoop/pull/2069#discussion_r483715827



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/CounterIOStatisticsBuilder.java
##
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.statistics.impl;
+
+/**
+ * Builder of the CounterIOStatistics class.
+ */
+public interface CounterIOStatisticsBuilder {

Review comment:
   yeah, it's obsolete. Removed





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479172)
Time Spent: 0.5h  (was: 20m)

> Add public IOStatistics API; S3A to support
> ---
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Applications like to collect the statistics which specific operations take, 
> by collecting exactly those operations done during the execution of FS API 
> calls by their individual worker threads, and returning these to their job 
> driver
> * S3A has a statistics API for some streams, but it's a non-standard one; 
> Impala &c can't use it
> * FileSystem storage statistics are public, but as they aren't cross-thread, 
> they don't aggregate properly
> Proposed
> # A new IOStatistics interface to serve up statistics
> # S3A to implement
> # other stores to follow
> # Pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to think about how best to offer an API for operation context 
> stats, and how to actually implement.
> ThreadLocal isn't enough because the helper threads need to update on the 
> thread local value of the instigator
> My Initial PoC doesn't address that issue, but it shows what I'm thinking of



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.

2020-09-04 Thread GitBox


steveloughran commented on a change in pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#discussion_r483715827



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/CounterIOStatisticsBuilder.java
##
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.statistics.impl;
+
+/**
+ * Builder of the CounterIOStatistics class.
+ */
+public interface CounterIOStatisticsBuilder {

Review comment:
   yeah, it's obsolete. Removed





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16830) Add public IOStatistics API; S3A to support

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16830?focusedWorklogId=479171&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479171
 ]

ASF GitHub Bot logged work on HADOOP-16830:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 16:00
Start Date: 04/Sep/20 16:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-680308475







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479171)
Time Spent: 20m  (was: 10m)

> Add public IOStatistics API; S3A to support
> ---
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Applications like to collect the statistics which specific operations take, 
> by collecting exactly those operations done during the execution of FS API 
> calls by their individual worker threads, and returning these to their job 
> driver
> * S3A has a statistics API for some streams, but it's a non-standard one; 
> Impala &c can't use it
> * FileSystem storage statistics are public, but as they aren't cross-thread, 
> they don't aggregate properly
> Proposed
> # A new IOStatistics interface to serve up statistics
> # S3A to implement
> # other stores to follow
> # Pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to think about how best to offer an API for operation context 
> stats, and how to actually implement.
> ThreadLocal isn't enough because the helper threads need to update on the 
> thread local value of the instigator
> My Initial PoC doesn't address that issue, but it shows what I'm thinking of



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2069: HADOOP-16830. IOStatistics API.

2020-09-04 Thread GitBox


hadoop-yetus removed a comment on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-680308475







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16830) Add public IOStatistics API; S3A to support

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16830?focusedWorklogId=479170&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479170
 ]

ASF GitHub Bot logged work on HADOOP-16830:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 15:59
Start Date: 04/Sep/20 15:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-681067758


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
38 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m  4s |  trunk passed  |
   | +1 :green_heart: |  compile  |  26m 25s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  22m 17s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 43s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 43s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 13s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 53s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 16s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 28s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 39s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 20s |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m 23s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |  25m 23s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 3 new + 2054 unchanged - 
1 fixed = 2057 total (was 2055)  |
   | +1 :green_heart: |  compile  |  21m 38s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  javac  |  21m 38s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 3 new + 1947 unchanged - 
1 fixed = 1950 total (was 1948)  |
   | -0 :warning: |  checkstyle  |   3m 51s |  root: The patch generated 19 new 
+ 258 unchanged - 26 fixed = 277 total (was 284)  |
   | -1 :x: |  mvnsite  |   0m 58s |  hadoop-mapreduce-client-core in the patch 
failed.  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 14 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  17m 19s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   1m 33s |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new 
+ 1 unchanged - 0 fixed = 2 total (was 1)  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  hadoop-mapreduce-client-core in 
the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. 
 |
   | +1 :green_heart: |  javadoc  |   0m 43s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 
0 unchanged - 4 fixed = 0 total (was 4)  |
   | -1 :x: |  findbugs  |   2m 25s |  hadoop-common-project/hadoop-common 
generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 37s |  hadoop-common in the patch passed. 
 |
   | +1 :gr

[jira] [Updated] (HADOOP-16830) Add public IOStatistics API; S3A to support

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-16830:

Labels: pull-request-available  (was: )

> Add public IOStatistics API; S3A to support
> ---
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Applications like to collect the statistics which specific operations take, 
> by collecting exactly those operations done during the execution of FS API 
> calls by their individual worker threads, and returning these to their job 
> driver
> * S3A has a statistics API for some streams, but it's a non-standard one; 
> Impala &c can't use it
> * FileSystem storage statistics are public, but as they aren't cross-thread, 
> they don't aggregate properly
> Proposed
> # A new IOStatistics interface to serve up statistics
> # S3A to implement
> # other stores to follow
> # Pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to think about how best to offer an API for operation context 
> stats, and how to actually implement.
> ThreadLocal isn't enough because the helper threads need to update on the 
> thread local value of the instigator
> My Initial PoC doesn't address that issue, but it shows what I'm thinking of



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2069: HADOOP-16830. IOStatistics API.

2020-09-04 Thread GitBox


hadoop-yetus removed a comment on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-681067758


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
38 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m  4s |  trunk passed  |
   | +1 :green_heart: |  compile  |  26m 25s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  22m 17s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 43s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 43s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 13s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 53s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 16s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 28s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 39s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 20s |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m 23s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |  25m 23s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 3 new + 2054 unchanged - 
1 fixed = 2057 total (was 2055)  |
   | +1 :green_heart: |  compile  |  21m 38s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  javac  |  21m 38s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 3 new + 1947 unchanged - 
1 fixed = 1950 total (was 1948)  |
   | -0 :warning: |  checkstyle  |   3m 51s |  root: The patch generated 19 new 
+ 258 unchanged - 26 fixed = 277 total (was 284)  |
   | -1 :x: |  mvnsite  |   0m 58s |  hadoop-mapreduce-client-core in the patch 
failed.  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 14 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  17m 19s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   1m 33s |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new 
+ 1 unchanged - 0 fixed = 2 total (was 1)  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  hadoop-mapreduce-client-core in 
the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. 
 |
   | +1 :green_heart: |  javadoc  |   0m 43s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 
0 unchanged - 4 fixed = 0 total (was 4)  |
   | -1 :x: |  findbugs  |   2m 25s |  hadoop-common-project/hadoop-common 
generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 37s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   7m  0s |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 40s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 225m 25s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Inconsisten

[jira] [Work logged] (HADOOP-17242) S3A (async) ObjectListingIterator to block in hasNext() for results

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17242?focusedWorklogId=479168&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479168
 ]

ASF GitHub Bot logged work on HADOOP-17242:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 15:51
Start Date: 04/Sep/20 15:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2273:
URL: https://github.com/apache/hadoop/pull/2273#issuecomment-687234185


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  32m 50s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 23s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 31s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 40s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m  2s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 59s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 32s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   1m  7s |  hadoop-tools/hadoop-aws generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 13s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 108m  1s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Unread field:Listing.java:[line 731] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2273/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2273 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux da1f9cec88e2 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5346cc32637 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | findbugs | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2273/2/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2273/2/testReport/ |
   | Max. process+thread count | 427 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/ha

[GitHub] [hadoop] hadoop-yetus commented on pull request #2273: HADOOP-17242. S3A ObjectListingIterator to block in hasNext() for results

2020-09-04 Thread GitBox


hadoop-yetus commented on pull request #2273:
URL: https://github.com/apache/hadoop/pull/2273#issuecomment-687234185


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  32m 50s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 23s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 31s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 40s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m  2s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 59s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 32s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   1m  7s |  hadoop-tools/hadoop-aws generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 13s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 108m  1s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Unread field:Listing.java:[line 731] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2273/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2273 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux da1f9cec88e2 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5346cc32637 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | findbugs | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2273/2/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2273/2/testReport/ |
   | Max. process+thread count | 427 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2273/2/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To resp

[jira] [Work logged] (HADOOP-17028) ViewFS should initialize target filesystems lazily

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17028?focusedWorklogId=479133&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479133
 ]

ASF GitHub Bot logged work on HADOOP-17028:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 14:27
Start Date: 04/Sep/20 14:27
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#2260:
URL: https://github.com/apache/hadoop/pull/2260#discussion_r483645622



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
##
@@ -284,7 +288,20 @@ boolean isInternalDir() {
   return false;
 }
 
-public T getTargetFileSystem() {
+/**
+ * Gets lazily loaded instance of FileSystem

Review comment:
   nit: add a . to keep javadoc happy

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
##
@@ -284,7 +288,20 @@ boolean isInternalDir() {
   return false;
 }
 
-public T getTargetFileSystem() {
+/**
+ * Gets lazily loaded instance of FileSystem
+ * @return An Initialized instance of T
+ * @throws IOException
+ */
+public T getTargetFileSystem() throws IOException {
+  if (targetFileSystem != null)
+return targetFileSystem;

Review comment:
   nit, curly { }

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
##
@@ -284,7 +288,20 @@ boolean isInternalDir() {
   return false;
 }
 
-public T getTargetFileSystem() {
+/**
+ * Gets lazily loaded instance of FileSystem
+ * @return An Initialized instance of T
+ * @throws IOException
+ */
+public T getTargetFileSystem() throws IOException {
+  if (targetFileSystem != null)
+return targetFileSystem;
+
+  if (targetDirLinkList.length == 1) {
+synchronized (this) {
+  targetFileSystem = fileSystemInitFunc.apply(targetDirLinkList[0]);

Review comment:
   this will double-initialize: thread #2 will block on the lock and then run 
the initialization again once thread #1 releases it. You need another check 
inside the sync block so the second thread won't repeat the work
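   
   A sketch of the double-checked pattern being asked for, reusing the names 
from the quoted diff (simplified: the single-link check is dropped, and the 
`targetFileSystem` field would also need to be `volatile` for safe publication):
   
   ```java
   private volatile T targetFileSystem;
   
   public T getTargetFileSystem() throws IOException {
     if (targetFileSystem == null) {
       synchronized (this) {
         // re-check under the lock so a second thread does not initialize again
         if (targetFileSystem == null) {
           targetFileSystem = fileSystemInitFunc.apply(targetDirLinkList[0]);
         }
       }
     }
     return targetFileSystem;
   }
   ```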

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##
@@ -893,6 +906,9 @@ public short getDefaultReplication(Path f) {
   return res.targetFileSystem.getDefaultReplication(res.remainingPath);
 } catch (FileNotFoundException e) {
   throw new NotInMountpointException(f, "getDefaultReplication"); 
+} catch (IOException e) {
+  throw new RuntimeException("Not able to initialize fs in "

Review comment:
   FYI. I've got a WrappedIOException, but in #2069 making it a public 
`org.apache.hadoop.fs.functional.RuntimeIOException` whose cause is only ever 
IOE. Not something to be picked up here (yet), but worth knowing. I'm trying to 
make the functional & lambda/expression stuff more usable.

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##
@@ -855,7 +861,11 @@ public void setVerifyChecksum(final boolean 
verifyChecksum) {
 List> mountPoints = 
 fsState.getMountPoints();
 for (InodeTree.MountPoint mount : mountPoints) {
-  mount.target.targetFileSystem.setVerifyChecksum(verifyChecksum);
+  try {
+mount.target.getTargetFileSystem().setVerifyChecksum(verifyChecksum);
+  } catch (IOException ex) {
+LOG.error("Could not set verifyChecksum for source path " + mount.src);
+  }

Review comment:
   log the full stack trace

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##
@@ -936,8 +956,13 @@ public void setWriteChecksum(final boolean writeChecksum) {
 fsState.getMountPoints();
 Set children = new HashSet();
 for (InodeTree.MountPoint mountPoint : mountPoints) {
-  FileSystem targetFs = mountPoint.target.targetFileSystem;
-  children.addAll(Arrays.asList(targetFs.getChildFileSystems()));
+  try {
+FileSystem targetFs = mountPoint.target.getTargetFileSystem();
+children.addAll(Arrays.asList(targetFs.getChildFileSystems()));
+  } catch (IOException ex) {
+LOG.error("Could not add child filesystems "
++ "for source path " + mountPoint.src);

Review comment:
   +full exception log
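   
   On the two logging comments: with SLF4J, passing the exception as the final 
argument is what gets the full stack trace into the log, even when `{}` 
placeholders are used. A small self-contained sketch:
   
   ```java
   import java.io.IOException;
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   public class LogWithStackTraceExample {
     private static final Logger LOG =
         LoggerFactory.getLogger(LogWithStackTraceExample.class);
   
     public static void demo(String src) {
       try {
         throw new IOException("simulated init failure");
       } catch (IOException ex) {
         // the trailing throwable argument makes SLF4J log the full stack trace
         LOG.error("Could not set verifyChecksum for source path {}", src, ex);
       }
     }
   }
   ```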





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479133)
Time 

[GitHub] [hadoop] steveloughran commented on a change in pull request #2260: HADOOP-17028. ViewFS should initialize mounted target filesystems lazily

2020-09-04 Thread GitBox


steveloughran commented on a change in pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#discussion_r483645622



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
##
@@ -284,7 +288,20 @@ boolean isInternalDir() {
   return false;
 }
 
-public T getTargetFileSystem() {
+/**
+ * Gets lazily loaded instance of FileSystem

Review comment:
   nit: add a . to keep javadoc happy

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
##
@@ -284,7 +288,20 @@ boolean isInternalDir() {
   return false;
 }
 
-public T getTargetFileSystem() {
+/**
+ * Gets lazily loaded instance of FileSystem
+ * @return An Initialized instance of T
+ * @throws IOException
+ */
+public T getTargetFileSystem() throws IOException {
+  if (targetFileSystem != null)
+return targetFileSystem;

Review comment:
   nit, curly { }

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
##
@@ -284,7 +288,20 @@ boolean isInternalDir() {
   return false;
 }
 
-public T getTargetFileSystem() {
+/**
+ * Gets lazily loaded instance of FileSystem
+ * @return An Initialized instance of T
+ * @throws IOException
+ */
+public T getTargetFileSystem() throws IOException {
+  if (targetFileSystem != null)
+return targetFileSystem;
+
+  if (targetDirLinkList.length == 1) {
+synchronized (this) {
+  targetFileSystem = fileSystemInitFunc.apply(targetDirLinkList[0]);

Review comment:
   this will double-initialize: thread #2 will block on the lock and then run 
the initialization again once thread #1 releases it. You need another check 
inside the sync block so the second thread won't repeat the work

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##
@@ -893,6 +906,9 @@ public short getDefaultReplication(Path f) {
   return res.targetFileSystem.getDefaultReplication(res.remainingPath);
 } catch (FileNotFoundException e) {
   throw new NotInMountpointException(f, "getDefaultReplication"); 
+} catch (IOException e) {
+  throw new RuntimeException("Not able to initialize fs in "

Review comment:
   FYI. I've got a WrappedIOException, but in #2069 making it a public 
`org.apache.hadoop.fs.functional.RuntimeIOException` whose cause is only ever 
IOE. Not something to be picked up here (yet), but worth knowing. I'm trying to 
make the functional & lambda/expression stuff more usable.

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##
@@ -855,7 +861,11 @@ public void setVerifyChecksum(final boolean 
verifyChecksum) {
 List> mountPoints = 
 fsState.getMountPoints();
 for (InodeTree.MountPoint mount : mountPoints) {
-  mount.target.targetFileSystem.setVerifyChecksum(verifyChecksum);
+  try {
+mount.target.getTargetFileSystem().setVerifyChecksum(verifyChecksum);
+  } catch (IOException ex) {
+LOG.error("Could not set verifyChecksum for source path " + mount.src);
+  }

Review comment:
   log the full stack trace

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##
@@ -936,8 +956,13 @@ public void setWriteChecksum(final boolean writeChecksum) {
 fsState.getMountPoints();
 Set children = new HashSet();
 for (InodeTree.MountPoint mountPoint : mountPoints) {
-  FileSystem targetFs = mountPoint.target.targetFileSystem;
-  children.addAll(Arrays.asList(targetFs.getChildFileSystems()));
+  try {
+FileSystem targetFs = mountPoint.target.getTargetFileSystem();
+children.addAll(Arrays.asList(targetFs.getChildFileSystems()));
+  } catch (IOException ex) {
+LOG.error("Could not add child filesystems "
++ "for source path " + mountPoint.src);

Review comment:
   +full exception log





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17227) improve s3guard markers command line tool

2020-09-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17227.
-
Fix Version/s: 3.3.1
   Resolution: Fixed

Fixed in branch-3.3

> improve s3guard markers command line tool
> -
>
> Key: HADOOP-17227
> URL: https://issues.apache.org/jira/browse/HADOOP-17227
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> The s3guard markers audit -expected N command is meant to verify that the 
> marker count is N, but
> * it isn't verified if the #of markers is 0, so you can't use it to assert 
> that markers have been created
> * it doesn't work for tests where you expect a minimum number of markers. 
> It's essentially setting a max #of markers
> Proposed: explicit -min, -max args to declare a specific range of values



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17167) ITestS3AEncryptionWithDefaultS3Settings fails if default bucket encryption != KMS

2020-09-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17167.
-
Fix Version/s: 3.3.1
   Resolution: Fixed

In trunk / branch-3; I have not yet tested it myself to confirm all is good.

> ITestS3AEncryptionWithDefaultS3Settings fails if default bucket encryption != 
> KMS
> -
>
> Key: HADOOP-17167
> URL: https://issues.apache.org/jira/browse/HADOOP-17167
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The test ITestS3AEncryptionWithDefaultS3Settings fails if
> * the test run is set up with a KMS key
> * the test bucket has a default encryption of AES (maybe also unencrypted)
> Proposed: downgrade the test to skip if the default encryption is determined 
> to be something other than S3-KMS
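
A minimal sketch of such a conditional skip with JUnit 4's `Assume`; the way 
the bucket's default encryption is discovered here is a placeholder, not the 
actual test code:

```java
import org.junit.Assume;
import org.junit.Test;

public class EncryptionSkipSketch {

  /** Placeholder for however the test discovers the bucket's default encryption. */
  private String defaultBucketEncryption() {
    return System.getProperty("test.default.bucket.encryption", "AES256");
  }

  @Test
  public void testEncryptionOverRename() {
    // Assume.assumeTrue() marks the test as skipped rather than failed.
    Assume.assumeTrue("bucket default encryption is not SSE-KMS",
        "aws:kms".equals(defaultBucketEncryption()));
    // ... the rename and encryption assertions would follow here ...
  }
}
```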



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17227) improve s3guard markers command line tool

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17227?focusedWorklogId=479116&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479116
 ]

ASF GitHub Bot logged work on HADOOP-17227:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 13:58
Start Date: 04/Sep/20 13:58
Worklog Time Spent: 10m 
  Work Description: steveloughran merged pull request #2254:
URL: https://github.com/apache/hadoop/pull/2254


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479116)
Time Spent: 2h 10m  (was: 2h)

> improve s3guard markers command line tool
> -
>
> Key: HADOOP-17227
> URL: https://issues.apache.org/jira/browse/HADOOP-17227
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> The s3guard markers audit -expected N command is meant to verify that the 
> marker count is N, but
> * it isn't verified if the #of markers is 0, so you can't use it to assert 
> that markers have been created
> * it doesn't work for tests where you expect a minimum number of markers. 
> It's essentially setting a max #of markers
> Proposed: explicit -min, -max args to declare a specific range of values



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #2254: HADOOP-17227. Marker Tool tuning

2020-09-04 Thread GitBox


steveloughran merged pull request #2254:
URL: https://github.com/apache/hadoop/pull/2254


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] imbajin commented on a change in pull request #2265: HDFS-15551. Tiny Improve for DeadNode detector

2020-09-04 Thread GitBox


imbajin commented on a change in pull request #2265:
URL: https://github.com/apache/hadoop/pull/2265#discussion_r483619060



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DeadNodeDetector.java
##
@@ -475,6 +475,7 @@ public synchronized void addNodeToDetect(DFSInputStream 
dfsInputStream,
   datanodeInfos.add(datanodeInfo);
 }
 
+LOG.warn("Add datanode {} to suspectAndDeadNodes", datanodeInfo);

Review comment:
   use DEBUG as default now





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2265: HDFS-15551. Tiny Improve for DeadNode detector

2020-09-04 Thread GitBox


hadoop-yetus commented on pull request #2265:
URL: https://github.com/apache/hadoop/pull/2265#issuecomment-687104767


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  28m  7s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  27m 55s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 51s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 58s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 30s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 26s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 24s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 49s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 50s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 43s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  3s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 26s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  4s |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 104m 51s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2265/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2265 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 53e09050e647 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 696e4fe50e4 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2265/2/testReport/ |
   | Max. process+thread count | 415 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2265/2/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apac

[jira] [Work logged] (HADOOP-17167) ITestS3AEncryptionWithDefaultS3Settings fails if default bucket encryption != KMS

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17167?focusedWorklogId=479076&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479076
 ]

ASF GitHub Bot logged work on HADOOP-17167:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 12:04
Start Date: 04/Sep/20 12:04
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on pull request #2187:
URL: https://github.com/apache/hadoop/pull/2187#issuecomment-687101182


   Thanks 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479076)
Time Spent: 40m  (was: 0.5h)

> ITestS3AEncryptionWithDefaultS3Settings fails if default bucket encryption != 
> KMS
> -
>
> Key: HADOOP-17167
> URL: https://issues.apache.org/jira/browse/HADOOP-17167
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The test ITestS3AEncryptionWithDefaultS3Settings fails if
> * the test run is set up with a KMS key
> * the test bucket has a default encryption of AES (maybe also unencrypted)
> Proposed: downgrade the test to skip if the default encryption is determined 
> to be something other than S3-KMS
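
A minimal sketch of what such a guard could look like as a JUnit 4 assumption; the `defaultBucketEncryption()` helper below is an assumed placeholder for illustration, not the method actually used in the patch:

```
import static org.junit.Assume.assumeTrue;

import org.junit.Test;

public class EncryptionSkipSketch {

  /** Assumed placeholder: the real test would query the bucket's default
   *  encryption algorithm; here it just returns a sample value. */
  private String defaultBucketEncryption() {
    return "AES256";
  }

  @Test
  public void testEncryptionOverRename() {
    String algorithm = defaultBucketEncryption();
    // Skip (rather than fail) when the bucket default is not SSE-KMS.
    assumeTrue("Default bucket encryption is " + algorithm + ", not SSE-KMS",
        "aws:kms".equals(algorithm));
    // ... the rename-under-encryption assertions would follow here.
  }
}
```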



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on pull request #2187: HADOOP-17167 Skipping ITestS3AEncryptionWithDefaultS3Settings.testEncryptionOverRename

2020-09-04 Thread GitBox


mukund-thakur commented on pull request #2187:
URL: https://github.com/apache/hadoop/pull/2187#issuecomment-687101182


   Thanks 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17028) ViewFS should initialize target filesystems lazily

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17028?focusedWorklogId=479067&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479067
 ]

ASF GitHub Bot logged work on HADOOP-17028:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 11:09
Start Date: 04/Sep/20 11:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#issuecomment-687080093


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 40s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m  4s |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 17s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 21s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  7s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 18s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 16s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 29s |  hadoop-common in the patch failed.  |
   | -1 :x: |  compile  |   1m  1s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |   1m  1s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 50s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |   0m 50s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   0m 33s |  
hadoop-common-project/hadoop-common: The patch generated 5 new + 159 unchanged 
- 4 fixed = 164 total (was 163)  |
   | -1 :x: |  mvnsite  |   0m 31s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | -1 :x: |  shadedclient  |   0m 40s |  patch has errors when building and 
testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   0m 28s |  hadoop-common in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 29s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 24s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 105m 25s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2260/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2260 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 49a2b68f61f9 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 696e4fe50e4 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2260/5/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2260/5/artifact/out/patch-compile-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt
 |
   | javac | 
https://ci-hadoop

[GitHub] [hadoop] hadoop-yetus commented on pull request #2260: HADOOP-17028. ViewFS should initialize mounted target filesystems lazily

2020-09-04 Thread GitBox


hadoop-yetus commented on pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#issuecomment-687080093


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 40s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m  4s |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 17s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 21s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  7s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 18s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 16s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 29s |  hadoop-common in the patch failed.  |
   | -1 :x: |  compile  |   1m  1s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |   1m  1s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 50s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |   0m 50s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   0m 33s |  
hadoop-common-project/hadoop-common: The patch generated 5 new + 159 unchanged 
- 4 fixed = 164 total (was 163)  |
   | -1 :x: |  mvnsite  |   0m 31s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | -1 :x: |  shadedclient  |   0m 40s |  patch has errors when building and 
testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   0m 28s |  hadoop-common in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 29s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 24s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 105m 25s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2260/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2260 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 49a2b68f61f9 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 696e4fe50e4 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2260/5/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2260/5/artifact/out/patch-compile-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2260/5/artifact/out/patch-compile-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2260/5/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2260/5/artifact/out/patch-compile-root-jdkPrivat

[GitHub] [hadoop] hadoop-yetus commented on pull request #2275: HDFS-15558: ViewDistributedFileSystem#recoverLease should call super.recoverLease when there are no mounts configured

2020-09-04 Thread GitBox


hadoop-yetus commented on pull request #2275:
URL: https://github.com/apache/hadoop/pull/2275#issuecomment-687077465


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   2m  8s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 11s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 55s |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 50s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   3m 55s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 19s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 22s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  0s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 30s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 36s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 58s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 59s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   3m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 37s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   3m 37s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 59s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 10s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 57s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 46s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  6s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  |  98m 15s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 215m 20s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2275/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2275 |
   | JIRA Issue | HDFS-15558 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 54631ccca339 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 139a43e98e2 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2275/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
http

[GitHub] [hadoop] YaYun-Wang commented on a change in pull request #2189: HDFS-15025. Applying NVDIMM storage media to HDFS

2020-09-04 Thread GitBox


YaYun-Wang commented on a change in pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#discussion_r477304478



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
##
@@ -77,6 +77,9 @@
   /** Returns true if the volume is NOT backed by persistent storage. */
   boolean isTransientStorage();

Review comment:
   The "isTransientStorage" method is still available. 
   In the original code,` isTransient()` and `isTransientStorage` methods are 
used to determine whether to support FsDatasetCache, Persistent, Quota, and 
Movable. 
   FsDatasetCache will be used When the storage type is persistent. NVDIMM is 
RAM to some extent, which is fast. However, NVDIMM is a persistent storage 
type.  Then, `isTransient()` and `isTransientStorage() ` used to determine 
whether to support FsDatasetCache can't meet the requirements. Therefore, we 
add  `isRAM()` and `isRAMStorage() ` methods to decide whether cache is 
supported or not. And the other functions, such as, Persistent,  Quota, and 
Movable judged by`isTransient()` and `isTransientStorage() ` methods.
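
   For readers skimming the thread, a tiny sketch of that division of 
responsibilities. It assumes the patched StorageType enum from this PR (with 
the new `isRAM()` accessor shown in the diff); the helper names are 
illustrative only, as read from this discussion, not the actual patch code:

```
import org.apache.hadoop.fs.StorageType;

final class StorageTypeChecksSketch {
  private StorageTypeChecksSketch() {
  }

  /** RAM-like media (RAM_DISK, NVDIMM) do not need FsDatasetCache. */
  static boolean supportsFsDatasetCache(StorageType type) {
    return !type.isRAM();   // isRAM() is added by this PR
  }

  /** Persistence, quota and mover/balancer eligibility still follow isTransient(). */
  static boolean isMovable(StorageType type) {
    return !type.isTransient();
  }
}
```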





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] YaYun-Wang commented on a change in pull request #2189: HDFS-15025. Applying NVDIMM storage media to HDFS

2020-09-04 Thread GitBox


YaYun-Wang commented on a change in pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#discussion_r483515521



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
##
@@ -34,28 +34,35 @@
 @InterfaceStability.Unstable
 public enum StorageType {
   // sorted by the speed of the storage types, from fast to slow
-  RAM_DISK(true),
-  SSD(false),
-  DISK(false),
-  ARCHIVE(false),
-  PROVIDED(false);
+  RAM_DISK(true, true),
+  NVDIMM(false, true),
+  SSD(false, false),
+  DISK(false, false),
+  ARCHIVE(false, false),
+  PROVIDED(false, false);
 
   private final boolean isTransient;
+  private final boolean isRAM;
 
   public static final StorageType DEFAULT = DISK;
 
   public static final StorageType[] EMPTY_ARRAY = {};
 
   private static final StorageType[] VALUES = values();
 
-  StorageType(boolean isTransient) {
+  StorageType(boolean isTransient, boolean isRAM) {
 this.isTransient = isTransient;
+this.isRAM = isRAM;
   }
 
   public boolean isTransient() {
 return isTransient;
   }
 
+  public boolean isRAM() {
+return isRAM;
+  }

Review comment:
   Sorry, I made a big mistake in the above reply. Our design idea is: NVDIMM 
supports the mover and balancer, and `isTransient()` is what applies in that 
case; `isRAM()` is only used for the FsDatasetCache judgment. So the current 
code is reasonable, and I think it is not necessary to modify it.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17028) ViewFS should initialize target filesystems lazily

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17028?focusedWorklogId=479031&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479031
 ]

ASF GitHub Bot logged work on HADOOP-17028:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 09:20
Start Date: 04/Sep/20 09:20
Worklog Time Spent: 10m 
  Work Description: abhishekdas99 commented on pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#issuecomment-687031356


   > I like the lazy eval style
   > 
   > * thread safety: what is the story here?
   > * what's the story w.r.t IOExceptions being raised in the init function?
   > 
   > in org.apache.hadoop.fs.impl.FunctionsRaisingIOE we have an interface for 
functions which explicitly raise them. Using that or something like it makes 
sense here
   
   Thanks @steveloughran for the review. 
   I have incorporated FunctionsRaisingIOE to take care of IOException 
handling; thanks for the pointer. I also added a synchronized block around the 
initialization of the file system instance.
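
   For anyone following along, a minimal, self-contained sketch of that 
pattern: a small function type that may raise IOException (mirroring the idea 
of FunctionsRaisingIOE, but written out locally rather than importing it) plus 
a synchronized lazy getter, so the target FileSystem is created on first 
access by exactly one thread. Names are illustrative, not the actual code in 
this PR:

```
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

/** Illustrative sketch of a lazily initialized mount target. */
class LazyTargetSketch {

  /** Local stand-in for a function that is allowed to throw IOException. */
  interface FsSupplier {
    FileSystem get() throws IOException;
  }

  private final FsSupplier initializer;
  private FileSystem fs;   // created on first use only

  LazyTargetSketch(URI targetUri, Configuration conf) {
    // Capture everything needed for initialization, but do not connect yet.
    this.initializer = () -> FileSystem.get(targetUri, conf);
  }

  synchronized FileSystem getTargetFileSystem() throws IOException {
    if (fs == null) {
      fs = initializer.get();   // only one thread performs the initialization
    }
    return fs;
  }
}
```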
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479031)
Time Spent: 1.5h  (was: 1h 20m)

> ViewFS should initialize target filesystems lazily
> --
>
> Key: HADOOP-17028
> URL: https://issues.apache.org/jira/browse/HADOOP-17028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: client-mounts, fs, viewfs
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Abhishek Das
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Currently, ViewFS initializes all configured target filesystems in
> viewfs#init itself.
> Some target filesystem initializations involve creating heavy objects and
> proxy connections. For example, DistributedFileSystem#initialize creates a
> DFSClient object, which opens proxy connections to the NameNode, etc.
> Consider ViewFS configured with 10 targets using hdfs URIs and 2 targets
> using s3a.
> If a client only works with the s3a targets, ViewFS will still initialize
> all targets irrespective of which ones the client is interested in. That
> means the client performs 10 DFS initializations and 2 s3a initializations,
> and the DFS initializations are unnecessary. So it would be a good idea to
> initialize a target fs only when the first usage call comes in for that
> particular target fs scheme.
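
To make the scenario above concrete, a rough sketch of a client that only 
touches an s3a-backed mount: with lazy initialization, only that one target 
filesystem would be created on first access, while the hdfs-backed targets 
stay uninitialized. The mount names, namenode addresses and bucket below are 
made up purely for illustration:

```
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ViewFsLazyClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Illustrative mount table: two hdfs targets and one s3a target.
    conf.set("fs.viewfs.mounttable.clusterX.link./warehouse", "hdfs://nn1/warehouse");
    conf.set("fs.viewfs.mounttable.clusterX.link./logs", "hdfs://nn2/logs");
    conf.set("fs.viewfs.mounttable.clusterX.link./backup", "s3a://example-bucket/backup");

    FileSystem viewFs = FileSystem.get(URI.create("viewfs://clusterX/"), conf);
    // With lazy initialization, only the s3a target behind /backup gets
    // initialized by this call; the hdfs targets are untouched until used.
    System.out.println(viewFs.exists(new Path("/backup/2020-09-04")));
  }
}
```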



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] abhishekdas99 commented on pull request #2260: HADOOP-17028. ViewFS should initialize mounted target filesystems lazily

2020-09-04 Thread GitBox


abhishekdas99 commented on pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#issuecomment-687031356


   > I like the lazy eval style
   > 
   > * thread safety: what is the story here?
   > * what's the story w.r.t IOExceptions being raised in the init function?
   > 
   > in org.apache.hadoop.fs.impl.FunctionsRaisingIOE we have an interface for 
functions which explicitly raise them. Using that or something like it makes 
sense here
   
   Thanks @steveloughran for the review. 
   I have incorporated FunctionsRaisingIOE to take care of IOException 
handling; thanks for the pointer. I also added a synchronized block around the 
initialization of the file system instance.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17028) ViewFS should initialize target filesystems lazily

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17028?focusedWorklogId=479029&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479029
 ]

ASF GitHub Bot logged work on HADOOP-17028:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 09:18
Start Date: 04/Sep/20 09:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#issuecomment-687030524


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  4s |  https://github.com/apache/hadoop/pull/2260 
does not apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2260 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2260/4/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479029)
Time Spent: 1h 20m  (was: 1h 10m)

> ViewFS should initialize target filesystems lazily
> --
>
> Key: HADOOP-17028
> URL: https://issues.apache.org/jira/browse/HADOOP-17028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: client-mounts, fs, viewfs
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Abhishek Das
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently, ViewFS initializes all configured target filesystems in
> viewfs#init itself.
> Some target filesystem initializations involve creating heavy objects and
> proxy connections. For example, DistributedFileSystem#initialize creates a
> DFSClient object, which opens proxy connections to the NameNode, etc.
> Consider ViewFS configured with 10 targets using hdfs URIs and 2 targets
> using s3a.
> If a client only works with the s3a targets, ViewFS will still initialize
> all targets irrespective of which ones the client is interested in. That
> means the client performs 10 DFS initializations and 2 s3a initializations,
> and the DFS initializations are unnecessary. So it would be a good idea to
> initialize a target fs only when the first usage call comes in for that
> particular target fs scheme.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2260: HADOOP-17028. ViewFS should initialize mounted target filesystems lazily

2020-09-04 Thread GitBox


hadoop-yetus commented on pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#issuecomment-687030524


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  4s |  https://github.com/apache/hadoop/pull/2260 
does not apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2260 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2260/4/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17028) ViewFS should initialize target filesystems lazily

2020-09-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17028?focusedWorklogId=479027&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-479027
 ]

ASF GitHub Bot logged work on HADOOP-17028:
---

Author: ASF GitHub Bot
Created on: 04/Sep/20 09:17
Start Date: 04/Sep/20 09:17
Worklog Time Spent: 10m 
  Work Description: abhishekdas99 commented on a change in pull request 
#2260:
URL: https://github.com/apache/hadoop/pull/2260#discussion_r483495855



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
##
@@ -284,7 +290,15 @@ boolean isInternalDir() {
   return false;
 }
 
-public T getTargetFileSystem() {
+public T getTargetFileSystem() throws IOException {

Review comment:
   Added javadoc. Also added a synchronized block around the apply call to make 
sure only one thread initializes the file system instance.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 479027)
Time Spent: 1h 10m  (was: 1h)

> ViewFS should initialize target filesystems lazily
> --
>
> Key: HADOOP-17028
> URL: https://issues.apache.org/jira/browse/HADOOP-17028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: client-mounts, fs, viewfs
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Abhishek Das
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Currently, ViewFS initializes all configured target filesystems in
> viewfs#init itself.
> Some target filesystem initializations involve creating heavy objects and
> proxy connections. For example, DistributedFileSystem#initialize creates a
> DFSClient object, which opens proxy connections to the NameNode, etc.
> Consider ViewFS configured with 10 targets using hdfs URIs and 2 targets
> using s3a.
> If a client only works with the s3a targets, ViewFS will still initialize
> all targets irrespective of which ones the client is interested in. That
> means the client performs 10 DFS initializations and 2 s3a initializations,
> and the DFS initializations are unnecessary. So it would be a good idea to
> initialize a target fs only when the first usage call comes in for that
> particular target fs scheme.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] abhishekdas99 commented on a change in pull request #2260: HADOOP-17028. ViewFS should initialize mounted target filesystems lazily

2020-09-04 Thread GitBox


abhishekdas99 commented on a change in pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#discussion_r483495855



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
##
@@ -284,7 +290,15 @@ boolean isInternalDir() {
   return false;
 }
 
-public T getTargetFileSystem() {
+public T getTargetFileSystem() throws IOException {

Review comment:
   Added javadoc. Also added a synchronized block around the apply call to make 
sure only one thread initializes the file system instance.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] umamaheswararao opened a new pull request #2275: HDFS-15558: ViewDistributedFileSystem#recoverLease should call super.recoverLease when there are no mounts configured

2020-09-04 Thread GitBox


umamaheswararao opened a new pull request #2275:
URL: https://github.com/apache/hadoop/pull/2275


   https://issues.apache.org/jira/browse/HDFS-15558
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] YaYun-Wang commented on a change in pull request #2189: HDFS-15025. Applying NVDIMM storage media to HDFS

2020-09-04 Thread GitBox


YaYun-Wang commented on a change in pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#discussion_r483410183



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
##
@@ -34,28 +34,35 @@
 @InterfaceStability.Unstable
 public enum StorageType {
   // sorted by the speed of the storage types, from fast to slow
-  RAM_DISK(true),
-  SSD(false),
-  DISK(false),
-  ARCHIVE(false),
-  PROVIDED(false);
+  RAM_DISK(true, true),
+  NVDIMM(false, true),
+  SSD(false, false),
+  DISK(false, false),
+  ARCHIVE(false, false),
+  PROVIDED(false, false);
 
   private final boolean isTransient;
+  private final boolean isRAM;
 
   public static final StorageType DEFAULT = DISK;
 
   public static final StorageType[] EMPTY_ARRAY = {};
 
   private static final StorageType[] VALUES = values();
 
-  StorageType(boolean isTransient) {
+  StorageType(boolean isTransient, boolean isRAM) {
 this.isTransient = isTransient;
+this.isRAM = isRAM;
   }
 
   public boolean isTransient() {
 return isTransient;
   }
 
+  public boolean isRAM() {
+return isRAM;
+  }

Review comment:
   NVDIMM is a special kind of RAM whose data can be stored persistently. It 
can be regarded as a general hardware device, so we don't have to consider 
which storage type it is; the balancer and mover can be applied to NVDIMM. 
Therefore, I think it is better to use `isRAM()` to determine whether to use 
the mover. In addition, neither RAM nor NVDIMM needs FsDatasetCache, and 
`isTransient()` is used to determine whether FsDatasetCache is needed.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2266: [RBF] HDFS-15554 Force router check file existence in destinations before adding/updating mount points

2020-09-04 Thread GitBox


hadoop-yetus commented on pull request #2266:
URL: https://github.com/apache/hadoop/pull/2266#issuecomment-686959347


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  8s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 49s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 31s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 36s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 12s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 10s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 31s |  
hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1
 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 30 
unchanged - 2 fixed = 30 total (was 32)  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 26s |  
hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new 
+ 30 unchanged - 2 fixed = 30 total (was 32)  |
   | -0 :warning: |  checkstyle  |   0m 16s |  
hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 16 new + 0 unchanged - 
0 fixed = 16 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 38s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 14s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  10m 35s |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  87m 37s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterAdminCLI |
   |   | hadoop.hdfs.server.federation.router.TestRouterMountTableCacheRefresh |
   |   | hadoop.hdfs.server.federation.router.TestRouterFaultTolerant |
   |   | hadoop.hdfs.server.federation.router.TestRouterAllResolver |
   |   | 
hadoop.hdfs.server.federation.router.TestRouterMountTableCacheRefreshSecure |
   |   | hadoop.hdfs.server.federation.router.TestDisableNameservices |
   |   | hadoop.hdfs.server.federation.router.TestRouterFsck |
   |   | hadoop.hdfs.server.federation.router.TestRouterQuota |
   |   | hadoop.hdfs.server.federation.router.TestRouterMissingFolderMulti |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2266/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2266 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux e0227bc6a6f2 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 139a

[GitHub] [hadoop] leosunli commented on a change in pull request #2265: HDFS-15551. Tiny Improve for DeadNode detector

2020-09-04 Thread GitBox


leosunli commented on a change in pull request #2265:
URL: https://github.com/apache/hadoop/pull/2265#discussion_r483384047



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DeadNodeDetector.java
##
@@ -396,13 +395,13 @@ private void probeCallBack(Probe probe, boolean success) {
 probe.getDatanodeInfo());
 removeDeadNode(probe.getDatanodeInfo());
   } else if (probe.getType() == ProbeType.CHECK_SUSPECT) {
-LOG.debug("Remove the node out from suspect node list: {}.",
+LOG.info("Remove the node out from suspect node list: {}.",

Review comment:
 When there are a lot of stale replicas, there should be many suspect nodes 
but not dead nodes.
   All of these nodes will print this log.
   What is the purpose of printing this log? 
   The client can still access the suspect node normally.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DeadNodeDetector.java
##
@@ -475,6 +475,7 @@ public synchronized void addNodeToDetect(DFSInputStream 
dfsInputStream,
   datanodeInfos.add(datanodeInfo);
 }
 
+LOG.warn("Add datanode {} to suspectAndDeadNodes", datanodeInfo);

Review comment:
   One case: when there are a lot of stale replicas, will this log flood?
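
   If flooding is the concern, a tiny sketch of the conservative alternative is 
to keep the per-node message at debug level, so a burst of suspect nodes stays 
out of production logs. This only illustrates the SLF4J-style parameterized 
logging already used in the diff above; the class and method names here are 
made up:

```
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class SuspectNodeLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(SuspectNodeLoggingSketch.class);

  void onSuspectNodeAdded(Object datanodeInfo) {
    // One message per suspect node: at debug level a flood of stale replicas
    // is only visible when debugging is explicitly enabled.
    LOG.debug("Add datanode {} to suspectAndDeadNodes", datanodeInfo);
  }
}
```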





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org